I kinda doubt that they think they are genuinely helpful. People get paid a lot for high-profile bug bounties. If the AI makes a discovery, or the people who check whether the report is any good are bad at their job, then you could make a ton of money for minimal effort.

I think it's the same sense of entitlement that fuels AI art. People who use AI feel entitled to the results of hard work, and do not value any of the in-between steps, even if they are very important (such as "checking if your discovery is useless" or "imparting your art with your own point of view").

I don't think we should become Luddites and trash all technology which makes life easier, but we REALLY need to both value things which do not sound obviously worth it, and, yes, become Luddites. The Luddites did not fight out of fear of technology, but of losing their jobs. If the current system rewards companies for getting rid of employees, and punishes people for not being employed, we need to seriously fix that, or break the machines until those with power have to fix the misaligned incentives.
AI "art" is a dead-end tech, so we'll be better off ending it once and for all anyway.
Also, I am now in BBRE's land.
How is it a dead-end tech? It has been improving a lot in the past year, though most of the biggest improvements come from locked-down AI companies that want to control (and get paid for) their AI's usage.
Also, as a programmer and not an artist, I see potential. For example, before AI art I could not have made a game all by myself, but with AI art I think I could make an entire game on my own.
The "Ouroboros effect": the programs that feed these gen-AI models training data are expected to scoop up enough AI-generated "art" that the errors in that AI slop bleed into, and build up in, the new models. Essentially, the gen-AI industry, or at least the AI-art branch of it, would eat itself.
All the AI companies already have massive datasets, and they can either filter out any data from after 2021 or spend financial resources on combing through the datasets to discard any AI-generated content.
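To make the first option concrete: a date cutoff is trivial to apply if each document carries a crawl date. This is just a minimal sketch; the record fields and dates here are hypothetical, not any company's actual pipeline.

```python
from datetime import date

# Hypothetical scraped records; the "crawled" dates are made up for illustration.
corpus = [
    {"text": "a human-written post", "crawled": date(2020, 5, 1)},
    {"text": "a post from after the generative boom", "crawled": date(2023, 8, 12)},
]

# Keep only documents captured before 2022, on the theory that earlier
# crawls contain little or no AI-generated text.
CUTOFF = date(2022, 1, 1)
pre_ai = [doc for doc in corpus if doc["crawled"] < CUTOFF]
```

Of course, this only works for as long as a pre-2022 snapshot stays useful; the second option (actually detecting AI-generated content) is the expensive one.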
The financial resources are the result of a bubble, and the effort required to filter out AI art is growing fast. AI only improves if its training data is on average better than its output, and the cost of filtering AI content out of training data is proportional to how good the AI gets. It's a feedback loop. Without venture capitalists stuck in a reversed, hype-based feedback loop, the money runs out and the AI can't get better. Eventually, either we spend literal trillions to make GPT-6, or the bubble pops. The earlier it pops, the better for the economy. And if we do spend literal trillions, WHAT NEXT???

For the people who say AI will get cheaper once it becomes able to improve itself and/or the methods we use to improve it: I can easily pit them against the people who say that once AI can improve itself we all die, and I find the latter slightly more convincing.
I think the bubble will burst in no more than a year from now, going by the comparison with the dot-com bubble.
There is some progress that can be made even without acquiring new training data, by tweaking the training process, the model architecture, or how the model is used. Still, I agree that progress is much slower that way than by getting more and more training data and increasing model sizes, which could also lead to the bubble bursting when progress is not as fast as was promised to the investors.
Edit: also I need to post this:
There's also a big issue with the way the training data is acquired from its creators…
Such as using comments from Reddit. Yes, this is true. Very recently, Reddit agreed to have its comments used for training AI, which is causing a lot of people's comments to be randomly deleted on the site despite those comments not violating any Reddit policy.
And sites like that wonder why they are losing users…
An interesting article on Screen Addiction and how to possibly tackle it.
People a few dozen decades from now will have a better idea of the true effects of these technologies…
CCTV footage of the 7.7-magnitude earthquake that struck Myanmar in March shows, for the first time, that curved slip-fault ruptures do exist, capturing one splitting a hill in half.
https://www.science.org/content/article/watch-earthquake-split-hillside-two
Well that is certainly terrifying.
Shhhh, don't give them ideas…
opera GF could also stand for opera get firefox
Isn't librewolf better now?