If you write policy about AI, you're doing it wrong. AI is an implementation; policies must be written for outcomes.
Discrimination by law enforcement, exclusion from loan approval, bad moderation on social networks, cheating on exams, creating fake news or media about people, swallowing up user data... all the negative social impact of AI can be achieved without it, and much of it is already illegal anyway.
Legislation that is predicated on AI will fail in the long run. Legislation that focuses on the actual negative outcomes will stand the test of time far better.
lm28469 1 day ago [-]
> all the negative social impact of AI can be achieved without it,
With the big differences being massive automation, a huge reduction in cost, and no one to blame when things go wrong... It's like saying a nuke and a knife are the same because they both kill.
ndriscoll 20 hours ago [-]
Someone is to blame for approving the use (or unapproved use) of a tool/process that breaks the law, same as today. When I worked in a regulated industry, we kept records of all inputs and decisions made and an auditor would do random checks that the results matched our documented methodology (afaik that documentation was submitted to/approved by either an auditor or the regulator).
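For illustration, a minimal sketch of that kind of decision log (Python; the field names and methodology tag are hypothetical, not from any particular regulator):

    import hashlib
    import json
    from datetime import datetime, timezone

    def record_decision(log_file, inputs, decision, methodology_version):
        # One audit entry: raw inputs, the outcome, and the version of the
        # documented methodology, so an auditor can re-run random samples
        # and check that results match.
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "inputs": inputs,
            "decision": decision,
            "methodology_version": methodology_version,
        }
        # A per-entry hash supports integrity spot-checks of stored records.
        entry["sha256"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        with open(log_file, "a") as f:
            f.write(json.dumps(entry) + "\n")

    # Hypothetical example: log one loan decision for later audit sampling.
    record_decision(
        "decisions.jsonl",
        inputs={"applicant_id": "A123", "income": 52000, "score": 710},
        decision="approved",
        methodology_version="underwriting-v4.2",
    )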
andy99 22 hours ago [-]
Agreed, I think it’s more a lens (one of many) that helps show what’s possible with technology and what may require legal protections.
For example things like privacy and surveillance laws obviously need updating in the face of advances in networking, data collection at scale, etc. Same with copyright in the face of plentiful copying.
But good laws will as you say address what is now possible or dangerous, as opposed to any specific implementation or general purpose technology involved. The tech just sets the context for what protections are needed.
khafra 1 day ago [-]
One outcome which is not unique to AI, but fairly exclusive to it: the value of human cognitive labor eventually drops below subsistence income. This isn't here yet, but it's a hard problem, so we should be devoting substantial resources to solutions before it hits.
ludicrousdispla 1 day ago [-]
"No one in this world, so far as I know — and I have researched the records for years, and employed agents to help me — has ever lost money by underestimating the intelligence of the great masses of the plain people. Nor has anyone ever lost public office thereby."
>> https://en.wikiquote.org/wiki/H._L._Mencken
Oh, so we can't address any specific problems with any technology, because we should actually be fixing all of society at the root of those problems. So while you wait for our broken political system to solve those root causes, enjoy feeling smug about not having implemented any imperfect, temporary band-aids to stop some bleeding.
Are you working on fixing those root problems? Or, after dismissing short-term policy band-aids, are you going to go back to working in an industry where you will probably make more money in the short run if governments don't regulate tech?
Your commitment to the long run will lead to paralysis and accomplish nothing in the long run.
danpalmer 1 day ago [-]
If there are problems that are specific to AI then sure we should legislate about it. For example defining what "fair use" is for AI training, that's a clearly new area.
But most of the pushback I've seen to AI in policy is so over-fit to current AI that it would be trivial to work around it. You can argue that we'd be letting perfect be the enemy of good, but I think we'd be making policies that will be out of date by the time they even make it into law, and that we'll never make any progress at all.
That said, I'm all for being proven wrong. The US tends to write highly specific legislation so I'm sure it'll try a few of these. The EU tends to write much more vague legislation specifically for this reason. We'll see how they end up working.
cudgy 19 hours ago [-]
“For example defining what "fair use" is for AI training, that's a clearly new area.”
I am not a patent attorney, but it seems like a clear violation of copyright. Based on your comment above regarding the breadth and focus of laws, and the fact that you feel copyright law was not well specified for the AI situation: how could the current law have been written so that it would have handled the AI situation and avoided the mess we're in now?
My guess is none of it matters, because AI is now so important and so critical in the minds of many government and business leaders that any violation of copyright will be excused, making the original law meaningless in this situation and undercutting this entire discussion.
lm28469 1 day ago [-]
> So while you wait for our broken political system
Yeah, we'd better leave these important topics in the hands of very stable people like Musk or Thiel; they surely know what the people want.
> make more money in the short run if governments don't regulate tech?
"Money money money money", homo sapiens' decerebration under capitalism is quite something to witness. Maybe, just maybe, there is more to life than raw productivity and money... The root causes you're talking about are greed and an unbounded quest for "progress"; piling more on top will certainly not help.
cudgy 19 hours ago [-]
So how does a country protect itself? If it opens its markets, then it is at the whim of the global marketplace, which is completely greed-focused, ultra-capitalist. We end up with what we have now in the US: lots of cheap crap, but poor income growth.
If it closes its markets and creates an insular market that provides workers decent pay and focuses on its citizens by having a self-reliant economy that minimally requires inputs or outputs from other countries, what is stopping companies from leaving the country? Capitalism, or at least pure capitalism with open markets, appears not to be working for the vast majority of the population of the world, or at least to be impossible to reconcile with the disparities between different countries. The only groups that appear to benefit or gain improvements are those at the bottom, because they can be easily exploited while at the same time feeling like they're making economic gains. Once this group's wages rise above another group's, the cycle repeats: the corporations rotate to the new low-cost region, causing all sorts of disruption, etc.
culll_kuprey 19 hours ago [-]
I’ve come to understand there’s no point arguing. These people are as amoral and unscrupulous as the dogs they worship.
robbrown451 1 day ago [-]
I'm having trouble understanding what they want to "upskill" those people to do.
What skills won't be replaced? The only ones I can think of either have a large physical component, or are only doable by a tiny fraction of the current workforce.
As for the ones with a physical component (plumbers being the most cited), the cognitive parts of the job (the "skilled" part of skilled labor) can be replaced by having the person just follow directions demonstrated onscreen. And of course, the robots aren't far behind, since the main hard part of making a capable robot is the AI part.
tintor 1 day ago [-]
'main hard part of making a capable robot is the AI part'
Robots are far behind.
Mechanical hands with human-equivalent performance are as hard a problem as the AI part.
Strong, fast, durable, tough, touch and temp sensitive, dexterous, light, water-proof, energy efficient, non-overheating.
Muscles and tendons in human hands and forearms self-heal and grow stronger with more use.
Mechanical tendons stretch and break. Small motors have plenty of issues of their own.
AndrewKemendo 1 day ago [-]
And your claim is that those will never be solved?
As a professional robotics engineer I can tell you for a fact they are coming soon.
WastedCucumber 1 day ago [-]
There's nothing in that post claiming those problems will never be solved. I understand the claim as:
"the hardware component of robotics needs more work and this will take some time, compared to AI capabilities/software", or something like that.
Maybe you could clarify what your experience on the matter is, how the state of the art looks to you, and most of all what timelines you imagine?
AndrewKemendo 18 hours ago [-]
Just look up “fine dexterous manipulation with pressure feedback” to see the SOTA for dexterous manipulation
There are at least a half dozen products, including two recently announced from Unitree and Allegro.
Rodney Brooks wrote about the challenges - but frankly it was a submarine piece for his work:
https://rodneybrooks.com/why-todays-humanoids-wont-learn-dex...
you are talking high-cost environments, at least for the moment?
Come on... show me a robot that can run a farm that grows organic produce at an affordable price. It is the lowest-wage job out there. Automating it would put prices far out of range for the 99% - but the billionaires couldn't care less?
Jensson 1 day ago [-]
You need an AI to do that, affordable robots are already here but the intelligence is not.
robbrown451 1 day ago [-]
For most things they don't need to be "human equivalent." I'd be willing to bet the current crop of robots we're seeing could do most tasks like vacuuming, cooking, picking up clutter, folding laundry and putting it away, making beds, touch-up painting, gardening, etc. It seems to be getting better very fast. And if mechanical tendons break, you replace them. Big deal. You don't even need a person to do the repair.
visarga 1 day ago [-]
I don't think "replaced" is the right word here; "augmented" and "expanded" fit better. With AI we are expanding our activities: users expect more, and competition forces companies to do more.
But AI can't be held liable for its actions; that is one role. It has no direct access to the context it is working in, so it needs humans as a bridge. In the end, AI produces outcomes in the same local context, which is for the user. So from intent to guidance to outcomes, they are all user-based, costs and risks too.
I find it pessimistic to take that static view of work, as if "that's it, everything we needed has been invented" and now we are fighting for positions like musical chairs.
lm28469 1 day ago [-]
> I don't think "replaced" is the right word here; "augmented" and "expanded" fit better. With AI we are expanding our activities: users expect more, and competition forces companies to do more.
Daily reminder that the vast majority of the value generated by the productivity boosts brought by technology in the last 50 years doesn't benefit the workers.
Agreed for almost all jobs, but some, like my father's, involve crawling inside huge metal pieces to do precision machining. For unique piecework, it might not be economical to train an AI. Surely equivalents to this exist elsewhere.
MatekCopatek 1 day ago [-]
It's hard to read this without being cynical.
How seriously would you take a proposal on car pollution regulation and traffic law updates written by Volkswagen?
protocolture 1 day ago [-]
Am I the US Government in this scenario?
SpicyLemonZest 1 day ago [-]
If Volkswagen's competitors ran around saying that cars aren't dangerous and there's no need to regulate them, and their critics insisted that you're a mark if you accept the premise that cars are a useful transportation method at all, I don't suppose I'd have a choice but to take it seriously. If you know of a similar analysis from a less conflicted group I'd love to read it!
blibble 1 day ago [-]
> How seriously would you take a proposal on car pollution regulation and traffic law updates written by Volkswagen?
they more or less wrote the EU emission regulations
it's the only reason diesel cars were sold in huge numbers in the EU
i see the comments here are pretty cynical about this post, and probably for good reason. especially "you might have to start taxing consumption instead of income because people won't have income anymore"
but at least a couple of these proposals seem to boil down to needing to tax the absolute crap out of the AI companies. which seems pretty obviously true, and it's interesting that the ai companies are already saying that.
cudgy 19 hours ago [-]
AI companies are in a difficult position right now. Anthropic is taking the lead by looking like they care and are concerned about the effects of the technology that they're feverishly building.
I don’t trust them. Their strategy is to say “don’t worry about all your jobs being taken by our technology. We (AI companies) are going to be taxed so much that you are going to be living a wealthy and fruitful life making meme photos and looking at AI porn. Don’t be concerned about how you’ll pay your bills. We’ll work it all out. Trust us.”
eucyclos 1 day ago [-]
I've found large entrenched players tend to prefer slightly more than a reasonable amount of taxation and regulation in any industry; governments are easier to predict and handle than scrappy competitors.
varispeed 1 day ago [-]
> it's interesting that the ai companies are already saying that.
This is just cheap PR to launder legitimacy and urgency, and to create a false equivalence between an AI agent and an employee.
I think this is a sign of weakness, having seen AI rolled out in many companies where it already shows signs of being an absolute disaster (summaries changing meaning and losing important details, so tasks go in the wrong direction and take time to correct; developers creating unprecedented amounts of tech debt with their vibe-coded features; massive amounts of content that sound important but are just the equivalent of spam; managers spending hours with an LLM "researching" strategy, feeding the FOMO; and so on).
buu700 1 day ago [-]
Personally, if I were going to publish something like this as a leader of a major AI company today, I would actually try very hard to put together a good faith proposal that I genuinely believed to be in the best interests of the public.
I can't speak to this particular proposal or the motivations behind it, but I think my approach is the smart play in the present circumstances. Why publish something brazenly self-serving that will at best be forgotten two weeks later, or at worst be added to the list of reasons a bunch of people have to hate you, when you could instead earn some goodwill as a benevolent thought leader and maybe get some academics and politicians to come out of the woodwork backing your ideas?
If the industry is successful and a particular player doesn't fall behind the competition, they're going to be making obscene amounts of money regardless. Better to have a happy and successful public that can't imagine life without you than a public in Great-Depression-like conditions that wants you dead and will only vote for politicians who campaign on banning your product.
As an aside, I'm not sold on the idea of taxes that specifically increase the cost of AI. I don't think it's wise to disincentivize AI usage or artificially inflate costs. (That would particularly hurt anyone with use cases that aren't connected to immediate profit.) If AI has the impact most of us would like it to have, the economy will become way more productive and the public will get its share of that through corporate taxes anyway. I'd rather just close tax loopholes and start laying the groundwork for a future system of distributing resources in a post-employment world.
My current preference is a guaranteed educational/training stipend for any unemployed adult who wants one, and changing the standard career advice for the next generation from "learn to code" to "learn to startup". Looking forward a decade from now, if employment as we know it is scarce, but the economy is flush with capital and automated labor is dirt cheap, it seems to me that self-employment will reemerge as the dominant career path — and anyone who can't raise funding for their business (or acquire grants for their research) will simply need to keep leveling up until they can. Maybe eventually we'll have the resources to transition to a full UBI, but in the meantime, we'd need a transitional system that could provide for the unemployed masses without incentivizing everyone else to suddenly quit jobs that were still necessary. Just my 2c.
cudgy 18 hours ago [-]
“My current preference is a guaranteed educational/training stipend for any unemployed adult who wants one, and changing the standard career advice for the next generation from "learn to code" to "learn to startup".”
I agree with this sentiment in the short term for people that have coding or startup skills already. We may need to ask ourselves at some point: why work for a company when I can use AI to create a competitor to my employer in two months?
However, this is not a long-term solution, as not everyone can run a startup. Startups fail at a huge rate, and they're going to fail even more as more startups and more people compete. Startups don't pay money until they start making a profit, which could take years, so it's not a legitimate replacement for a current position. This seems like a very competitive, low-cost-of-entry, race-to-the-bottom type of market, so many of the benefits may quickly disappear.
buu700 17 hours ago [-]
I think it could be a pretty reasonable system. The idea is that universal guaranteed stipends would become the ultimate backstop: almost a UBI, but targeted at those with actual need for it while requiring something of social benefit in return. I'd imagine that under this system the average person would live off of stipends indefinitely, which is fine because acting as a redundant store of useful knowledge is valuable to society in and of itself.
If someone runs a startup that isn't providing a livable income and they don't have savings to live off of, that startup shouldn't be their full-time job. Of course startups aren't for everyone, just as coding isn't, but there are many other forms of self-employment. Even so, I'd imagine successful startups to be far more common than today in such an environment — if not by percentage, at least by absolute numbers. A world of cheap and abundant capital with engineering and physical labor available at a fraction of the cost of human employees would be an entrepreneur's dream.
cudgy 15 hours ago [-]
We are far from UBI though. It will take major-league arm-twisting to get the government to take care of citizens like that. The oligarchs want it all, and it'll take some serious work to overcome their resistance to increasing their taxes for UBI.
Also, AI may be more capable by the time we even get there (if we ever do), and AI may be a better entrepreneur than a human. Once that happens, look for the cost of AI to go sky-high and access to it to become highly restricted and only available to the elite.
danaris 20 hours ago [-]
> Why publish something brazenly self-serving that will at best be forgotten two weeks later, or at worst be added to the list of reasons a bunch of people have to hate you, when you could instead earn some goodwill as a benevolent thought leader and maybe get some academics and politicians to come out of the woodwork backing your ideas?
For the same reason that the tech execs do all the other terrible things they do: because they want to own e v e r y t h i n g, and know that they can't do that by acting in good faith.
They want to be the new feudal overlords, and care much less about "goodwill" than they do about making it seem inevitable that they will be the gatekeepers of all thought and labor.
The more they can convince you, the people, and the policymakers that this "AI revolution" is real, and not just a bubble, the less likely everyone is to see through their exaggerations, misdirections, and outright lies to the fact that LLMs are not, and are never going to become, AGI. They are measurably not replacing any significant number of workers. They cannot do our jobs.
blibble 1 day ago [-]
they seem to have omitted the scenarios where the newly unemployable electorate turn on them
mrshadowgoose 1 day ago [-]
I used to think that "AI operating in meatspace" was going to remain a tough problem for a long time, but seeing the dramatic developments in robotics over the last 2 years, it's pretty clear that's not going to be the case.
As the masses fade into permanent unemployment, this will likely coincide with (and be partially caused by) a corresponding proliferation in intelligent humanoid robots.
At a certain point, "turning on them" becomes physically impossible.
jononor 1 day ago [-]
What developments have happened in robotics over the last 2 years that changed your mind?
I am familiar with general industry trends in electronics over the last two decades, as well as the last decade in machine learning.
wmf 1 day ago [-]
The higher taxes proposed here could be used to buy off the electorate.
musicale 1 day ago [-]
The most immediate impact might be the bursting of the AI bubble and a dotcom-like crash of tech stocks and businesses like Anthropic.
Financial circularity could also lead to instability.
Mistletoe 1 day ago [-]
This is actually the most likely outcome but it's the quiet part an AI company isn't going to say out loud.
frozenseven 1 day ago [-]
>bursting of the AI bubble
I hope people will eventually revisit these predictions and admit they were wrong.
Jensson 19 hours ago [-]
There is an AI bubble, just like there was a dotcom bubble. They are not wrong; everyone knows there is an AI bubble, even Sam Altman says AI is currently in a bubble. The question is how much value AI delivers after the bubble has popped, but there is a bubble.
frozenseven 12 hours ago [-]
AI is undervalued & I don't give a shit what Sam Altman has to say about anything.
throw-10-13 1 day ago [-]
Seems like the crypto grifters and moon boys have found a new home.
mjbale116 1 day ago [-]
anthropic are so sure about the incoming economic impact of their AI that they want to start talking about policy - for our sake.
Incredible stuff...
swoorup 23 hours ago [-]
This smells of regulatory capture.
remarkEon 1 day ago [-]
ctrl + f for "immigration" returns nothing.
Not serious, not worth reading.
vkou 1 day ago [-]
This is a point that must be harped on, frequently and loudly.
Anyone with anxieties over immigration should have those same concerns over AI, many times over.
Skilled immigrants just got a $100,000/year head tax in the US. Where is such a tax for AI?
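As a back-of-the-envelope sketch of what an equivalent levy could look like (every figure below is an assumption for illustration, not a proposal):

    # All numbers are illustrative assumptions.
    head_tax_per_worker = 100_000      # the skilled-immigrant fee cited above
    workers_displaced_per_agent = 1.0  # assume one AI agent displaces one worker
    agent_running_cost = 12_000        # assumed annual inference cost per agent

    equivalent_levy = head_tax_per_worker * workers_displaced_per_agent
    print(f"Equivalent levy per agent: ${equivalent_levy:,.0f}/year")
    print(f"That is {equivalent_levy / agent_running_cost:.1f}x the agent's assumed running cost")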
remarkEon 1 day ago [-]
100% and the anxieties are related. If AI is going to start cannibalizing entire classes of employment, then what's the point of high levels of immigration to "support the job market"?
AndrewKemendo 1 day ago [-]
Much like the end of history wasn’t the end of history
LLM-attention-centric AI isn't the end of AI development.
So if they are successful at locking it in, it will be to their own demise, because it doesn't cover the infinitely many pathways for AI to continue down, specifically intersections with robotics and physical manipulation, that are ultimately far more impactful on society.
Until the plurality of humans on earth understand that human exceptionalism is no longer something to be taken for granted (and shouldn't have been), there's never going to be effective global governance of technology.
blind_tomato 1 day ago [-]
> Until the plurality of humans on earth understand that human exceptionalism is no longer something to be taken for granted (and shouldn't have been), there's never going to be effective global governance of technology.
Could you elaborate on this? FYI, fully agreed on the preceding sentences.
icandoit 1 day ago [-]
The bacteria that aren't antibiotic-resistant eventually get replaced by those that are.
Maybe you are alcohol, gambling, and pornography resistant but maybe you have friends and family that aren't. Are you picking up their slack?
What circumstances make "going Amish" look, not just reasonable, but necessary for survival?
AndrewKemendo 1 day ago [-]
It’s taken for granted that humans are the best choice for how to accomplish tasks.
The tasks humans are best at now are different than 10kya.
The world changes, new human jobs are made and humans collectively move up the abstraction chain. Schumpeter called this creative destruction and “capital + technology” is the transition function.
At the point where "capital + technology" does not need a human anymore (and that will happen, if not in my lifetime then at least in the next 500 years), there will be nothing more to argue for or retain.
So unless humanity recognizes this and decides to organize as humans (not as Europeans, or Alabamans, or Han, etc.), this is the only possible outcome.
Personally, I don't think that's mathematically/energetically possible for humans to do, because we're not biologically capable of that level of eusocial coordination.
cudgy 18 hours ago [-]
“So unless humanity recognizes this and decides to organize as humans (not as Europeans, or Alabamans, or Han, etc.), this is the only possible outcome.”
Why do you think this is the only possible outcome? Aren’t we already organized as humans? Won’t people revolt when this really hits the fan?
AndrewKemendo 18 hours ago [-]
No we’re not organized as humans
Revolt is only a transitional process from one structure to another; it doesn't change the fundamental fact that humans are not eusocial.
cudgy 15 hours ago [-]
So humans are just running around as unorganized individuals, hermits, living in small tents in the woods by themselves. There are no borders or countries or associations of humans. We are all just unorganized. Really?
Either your definition of organized is different than mine or this is a silly conversation.
AndrewKemendo 15 hours ago [-]
Yes, clearly it's a different definition.
Organization means everyone is in the same singular organizational identity - eusocially
The closest formal group seems to be the European Union - but that’s still infinitely far away from what’s needed to survive
Humans either figure out how to form a biological superorganism or go extinct to non-human intelligence
This isn’t happening tomorrow, but on century time scales, it’s obviously the only likely trajectory
cudgy 12 hours ago [-]
OK. Now I understand where you’re coming from. You are using an analogy to insect groups with the term eusocial, which is defined as “cooperative brood care (including care of offspring from other individuals), overlapping generations within a colony of adults, and a division of labor into reproductive and non-reproductive groups.”
Not sure I would use the word "organized" to describe this, though. It actually sounds more like a hunter-gatherer society / commune / family. It does seem unlikely that this could happen on a global scale, though. It's more likely to occur in smaller groups, because without some familiarity between the people, they're unlikely to open up to such personal activities as child rearing.
Anyway, I like your idea. Humans coming together to ensure fairness is going to be necessary. I just don't think it's realistic to expect this at the global scale.
What may be feasible is for people with similar occupations to join together in global labor unions for leverage against the corporations. These unions could have standards for how workers interface with corporations, especially global corporations utilizing AI and/or other technologies that impact society in a potentially harmful way.
AndrewKemendo 7 hours ago [-]
Yes you nailed it! Thanks for the thoughtful response.
> I just don't think it's realistic to expect this at the global scale.
This is exactly my point. We don't have the biology for it - mammals don't have eusocial traits because we're too complex and egocentric, to the extent that game-theoretic defections are individually risky but can have individual benefits.
A group of soldier ants can't start their own colony because they physically cannot reproduce without a hierarchical queen; they are effectively sterile.
Dunbar's number limits the possible social interactions at the depth you describe to 150-250 people at the most. That's your tribal limit, and it's seen in extant hunter-gatherer groups as you describe.
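For reference, the commonly cited regression behind that figure, from Dunbar's 1992 primate study (coefficients as usually reported; treat them as approximate):

    log10(N) = 0.093 + 3.389 * log10(CR)

where N is the predicted group size and CR is the neocortex ratio. For humans, CR is roughly 4.1, giving N ≈ 10^(0.093 + 3.389 * 0.613) ≈ 148, the familiar "Dunbar's number".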
While your ideas about global labor cooperation are valid, ultimately they're stymied by the limits of neocortex size, and you're back to where you started.
Note that we already tried the hunter-gatherer thing for about 250k years, and it got overrun by transactional colonialism.
If you want to read my theory work on this, here are some resources - note, though, it's a lot of reading:
https://kemendo.com/Myth-of-Scarcity.html
High school version: https://kemendo.com/basiccohesion.html
Full draft PDF: https://kemendo.com/GTC.pdf
As some Australian video game character once said: “as long as there are two people in the world, someone is going to want someone dead”
Nasrudith 1 day ago [-]
I highly doubt there is ever going to be "effective governance" of technology, for multiple reasons. It would require impossible foreknowledge of the impacts of every possible technology, and dystopian levels of control to prevent new ideas from coming into play. We cannot even get the direction of impact right a priori. Even if they had that, this dystopia would have to remain both stable and unmutated over generations. All the while, their control creates its own counterforce, incentivized to invent tech outside of their control to topple the forces of stagnation.
Without any modifications, MOOCs have single-digit completion rates. This is high-quality, free, publicly available educational material.
The vast majority of people simply do not have the time, money, or undivided attention to get a new domain under their belt.
This is “help miners learn code” territory.
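The arithmetic here is brutal; with illustrative numbers (both assumptions, not measurements):

    completion_rate = 0.05            # "single digit" MOOC completion, assumed 5%
    workers_to_retrain = 1_000_000    # hypothetical displaced workers

    enrollments_needed = workers_to_retrain / completion_rate
    print(f"{enrollments_needed:,.0f} enrollments for {workers_to_retrain:,} completions")
    # ~20,000,000 enrollments, and completion alone doesn't imply employability.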
sateesh 21 hours ago [-]
What other option can you propose? This article [1] says the approaches preferred by economists are retraining, regulation, or social insurance, and for most of the people surveyed "retraining" was the preferred approach.
Not sure MOOCs can be taken as a useful proxy for measuring the success of upskilling. Most employers won't honor MOOC certs, and people do MOOCs while working. Taking a MOOC doesn't inherently ensure that the learner has mastered the course, hence there is also less incentive to complete it.
1. https://www.foreignaffairs.com/united-states/coming-ai-backl...
Then there are NO options. We will have to live with that world and having no future.
I'm aware that this level of nihilism is difficult for many to stomach - but it's only nihilism if you believe in fairy tales.
If we are talking about reality, then we have to deal with the impossible challenges we are facing.
The fact is that retraining will not work - nor is this the first time it's been held up as a hope in the past 25 years. ("Teaching miners how to code" comes from the last time this was a big hope in America.)
If it helps you feel better - I said unmodified MOOCs.
With some changes you can increase MOOC completion rates - but there isn't enough lift even after that.
I used to be a champion for education initiatives to upskill workers - from a time before MOOCs.
The failure of MOOCs was the end of that hope, because it showed there was a gap between the ideal and reality.
People simply can’t retrain like that.
sateesh 19 hours ago [-]
Not that I completely disagree with you; indeed, even with excellent training, I'm not sure how many can be retrained to be good enough for new kinds of job roles and skills. Also, it isn't certain that there will be far more new jobs than what gets shed, in which case there won't be enough demand to absorb even the retrained (and skilled) labor.
cudgy 18 hours ago [-]
Serious question: what should people retrain in? What jobs or training will provide long-term income potential? You don't want to spend four years retraining for a job or career that only lasts two or seven years. My point is: say a new technology or occupation comes out. How long will it take for AI and robots to be able to perform that occupation? Can people train and actually work in that career before the robots take it over?
Is AI the proverbial apple of Adam and Eve? Are we justifying taking a bite of it just because it's there? Are we helpless and unable to defend ourselves against it? My worry is that thoughts and questions like these are going through young people's minds as they decide how to provide for themselves and proceed into a career. Are we headed toward learned helplessness?
intended 14 hours ago [-]
Sadly my friend, the fact is that we are hosed.
Retraining is a pipe dream which will be sold for another 5 years, till most people are underemployed.
This is what happened to factory workers, and to an extent is going to happen to knowledge workers.
The real menace is hidden in the details, though: knowledge work is assumed to have one core component - information. Accurate information.
In reality it has two: emotional salience and informational accuracy.
LLMs generate content - I foresee a future where people are underemployed as output verifiers. So a PhD in physics helps you QC an LLM.
The only job left is to own a firm, but even that will be closed off, because you will either be selling to capital owners or to the (underemployed) majority of humanity.
The only hope for technology is that it creates a revolution which upends the preexisting incumbents. But the issue here is the underemployment.
hackable_sand 9 hours ago [-]
Can I also get a PhD in Hacker News Commenting?
intended 1 hour ago [-]
Well, considering that your comments are used to feed these systems, I think it's more a case of creating a Hacker News commenters' union.
watwut 1 day ago [-]
> "you might have to start taxing consumption instead of income because people won't have income anymore"
A proposal written by billionaires trying to shift taxation even further away from themselves and onto everyone else.
> Accelerate permits and approvals for AI infrastructure
Oh, they want that? Who would have guessed.
cudgy 18 hours ago [-]
They want to speed up the process of getting laws written to protect them and AI. One way to do that is to appear to be looking for a solution while at the same time stressing how urgent things are and how quickly we need to pass laws. You can guess how those laws will be focused: my guess is on benefiting the AI companies and the companies that plan to use AI to build their businesses.
The rationale will be framed as protecting the citizens from the evil other country that's building AI. "Without strong AI, we can't build weapons to defend the country," and "without strong AI, our companies won't be able to compete in the world marketplace."