Posts tagged with EffectiveAltruism


"A couple years ago, Oliver Habryka, the CEO of Lightcone, a company affiliated with LessWrong, published an essay asking why people in the rationalism, effective altruism and AI communities “sometimes go crazy”.

Habryka was writing not long after Sam Bankman-Fried, a major funder of AI research, had begun a spectacular downfall that would end in his conviction for $10bn of fraud. Habryka speculated that when a community is defined by a specific, high-stakes goal (such as making sure humanity isn’t destroyed by AI), members feel pressure to conspicuously live up to the “demanding standard” of that goal.

Habryka used the word “crazy” in the non-clinical sense, to mean extreme or questionable behavior. Yet during the period when Ziz was making her way toward what she would call “the dark side”, the Berkeley AI scene seemed to have a lot of mental health crises.

“This community was rife with nervous breakdown,” a rationalist told me, in a sentiment others echoed, “and it wasn’t random.” People working on the alignment problem “were having these psychological breakdowns because they were in this environment”. There were even suicides, including those of two people who were part of the Zizians’ circle.

Wolford, the startup founder and former rationalist, described a chicken-and-egg situation: “If you take the earnestness that defines this community, and you look at civilization-ending risks of a scale that are not particularly implausible at this point, and you are somebody with poor emotional regulation, which also happens to be pretty common among the people that we’re talking about – yeah, why wouldn’t you freak the hell out? It keeps me up at night, and I have stuff to distract me.”

A high rate of pre-existing mental illnesses or neurodevelopmental disorders was probably also a factor, she and others told me."

https://www.theguardian.com/global/ng-interactive/2025/mar/05/zizians-artificial-intelligence

#SiliconValley #Transhumanism #EffectiveAltruism #Rationalism #AI


#EffectiveAltruism folx should work on their reading comprehension.

I got an "#AI in 2024" retrospective from 80,000 Hours, an Effective Ventures project (related to EA). In it, they mention that "the o1 language model [developed by OpenAI] [...] has the ability to deliberate about its answers before responding."

The OpenAI o1 release says: "We introduce deliberative alignment, a training paradigm that directly teaches reasoning LLMs [...] safety specifications..."

Quite the leap of faith...


Today at 18:00 in central #Leipzig there is an #EffectiveAltruism Meetup/Kennlernabend (a get-to-know-you evening).

https://forum.effectivealtruism.org/events/bfoec68EM9fLQwrcJ/kennlernabend

I'm planning to go and see whether any strategic alliances or other collaborations can be formed.