Resisting Artificial Intelligence
Welcome to Resisting Artificial Intelligence, a website operated by Kyle Heger of Albany, California, as a resource for people who want to fight back against the dangers of artificial intelligence.
Below is information about petitions, books, news releases and organizations relevant to this issue.
To comment, ask a question or contribute content, reach Kyle at resistingartificialintelligence@yahoo.com
Updated March 26, 2026
Making AI a 2026 Election Issue
Try to make AI an election issue this year, even during the primaries.
Below is an example of a letter I sent to each of the candidates running for governor in the California primary election. I also posted brief items on candidates' X and Bluesky accounts, asking for their positions on artificial superintelligence. I missed the chance to raise the topic at earlier debates involving these candidates, but I hope to raise it if they hold more debates or participate in town hall meetings.
Once the primaries are over, I intend to raise the issue with candidates in California and in key races across the country.
Example of letter:
As a California voter, I’m writing to ask: If elected governor, will you make reining in the threat of artificial superintelligence an immediate priority?
This should be a top priority. Many other urgent economic, social-justice, peace-and-war and environmental-protection problems press upon us, but we can’t afford to wait for them to be solved before acting on this one.
In January, The Bulletin of the Atomic Scientists cited AI as a factor that has moved the hands of its Doomsday Clock the closest they’ve ever been to midnight: 85 seconds away.*1 Of all the risks posed by AI, the most dangerous is the imminent development of artificial superintelligence, which threatens all life on earth.
Last year saw the release of the book “If Anyone Builds It, Everyone Dies” by Eliezer Yudkowsky and Nate Soares of the nonprofit Machine Intelligence Research Institute (MIRI), which warns: “If any company or group … builds an artificial superintelligence using anything remotely like current techniques … then everyone, everywhere on Earth, will die.” This threat, although it stretches the bounds of imagination, is real, serious and immediate.
Since then, thousands of people, including some who work at OpenAI, Anthropic, Google and Meta, have signed an open letter circulated by the Future of Life Institute, saying, “We call for a prohibition on the development of superintelligence, not lifted before there is 1) broad scientific consensus that it will be done safely and controllably, and 2) strong public buy-in.” *2 According to a survey conducted last year by the same organization, 64 percent of U.S. adults believe superhuman AI shouldn’t be made until proven safe or controllable, or should never be made. *3
One important step to take in this state to reduce this threat is to require an “off switch” on advanced artificial intelligence developed and employed in California. Below is a description of one such “off switch” proposed by MIRI.
Please make the mandatory use of such off switches a priority in your campaign, and, if elected, make it a priority as governor.
News Release
Jan. 20, 2026
Father of Three Sends Family Photo to AI CEOs, Showing Them “the Faces of People They’re Endangering”
A father of three in Albany, CA, Kyle Heger, sent personal letters to Elon Musk and 23 other CEOs on Jan. 20, asking them to stop making artificial intelligence (AI) more powerful. He says he did it “on behalf of myself, my wife, my sons and all humanity.”
The companies include Alphabet, Amazon, Apple, Meta, Microsoft and companies headquartered in Canada, China, the Czech Republic, India, the Netherlands, Sweden and Switzerland.*
Heger says he sent the letters because he fears that if AI systems keep growing more powerful, they will soon seriously threaten the survival of humans and maybe all life on earth. In the letters, he quotes various AI experts. For example, he quotes Geoffrey Hinton, often called the “Godfather of AI,” as saying of advanced AI systems: “The alarm bell I’m ringing has to do with the existential threat of them taking control [...] If you take the existential risk seriously, as I now do, it might be quite sensible to just stop developing these things any further.”
The letter also cites a 2023 survey by AI Impacts, in which between 37.8% and 51.4% of respondents gave at least a 10% chance of advanced AI leading to outcomes as bad as human extinction.
Heger explains, “I sent a photograph of myself, my wife and our three sons to show CEOs the faces of people they’re endangering. We’re not just statistics.”
He emphasizes that he sent the letter on his own, not as part of any organization.
Heger, 66, now retired, was formerly a medical coder, magazine editor and nonprofit administrator. He admits he’s not an expert on AI. But, he says, “I’m not an expert on trains either, but I know enough to jump out of the way if one’s barreling toward me. Unfortunately, we can’t just jump out of AI’s way. We need the people who are developing it to take immediate action to keep us safe.”
The letter was sent by U.S. post. “One of the biggest of the Digital Age’s many problems is the lack of direct human interaction,” Heger says. “Using U.S. mail is my way of trying to reach these guys with as few digital mediations as possible. I realize the chances they’ll actually read the letters are remote. But who knows? Maybe the novelty of getting a message by post will make someone at a corporation pay a little more attention than otherwise. Anyway, we can’t afford to just sit back and do nothing.”
For a copy of the letter, the photo or other information, please contact Kyle Heger at resistingartificialintelligence@yahoo.com
*Full list of companies receiving Heger’s letter: Alibaba Cloud, Alphabet (Google), Amazon, Anthropic, Apple, Deep Cogito, Deepinvent, Eon Systems, GoodAI, Harmonics, Inc., Integral AI, Keotic, Lila Sciences, Inc., Meta (Facebook), Microsoft, Nnaisense, Olbrain, OpenAI, Robust AI, Safe Superintelligence Inc., SingularityNET, Superintelligence Computing Systems, Inc., Tesla, Vicarious Surgical, xAI.
Example of Letter to CEOs
Elon Musk
CEO
Tesla
1 Tesla Road
Austin, TX 78725
January 17, 2026
Dear Mr. Musk,
I’m writing on behalf of my sons, my wife, myself and all humanity to plead with you to stop developing more powerful artificial intelligence, whether it’s at the level of “artificial general intelligence” or “high-level machine intelligence” or “artificial superintelligence.”
On one hand, humanity risks little by not developing such technology. There are plenty of good works we can do using the types of AI we have now and other means at our disposal. AI is not essential to life. Life was worth living before anyone even dreamed of such technology, and it would still be worth living if all AI disappeared today.
On the other hand, developing more powerful AI presents an unacceptably high risk to life.
On “Tucker Carlson Tonight,” you said AI “has the potential of civilizational destruction.”
In a 2023 survey of AI researchers by AI Impacts, between 37.8% and 51.4% of respondents gave at least a 10% chance of advanced AI leading to outcomes as bad as human extinction.
132,879 of us, including AI experts, have signed a petition circulated by the Future of Life Institute that says, “We call for a prohibition on the development of superintelligence, not lifted before there is 1) broad scientific consensus that it will be done safely and controllably, and 2) strong public buy-in.”
According to a 2025 survey conducted for the Future of Life Institute, 64 percent of U.S. adults believe superhuman AI shouldn’t be made until proven safe or controllable, or should never be made.
Admittedly, I’m not a technical expert on AI. But I don’t need to be an expert to be one of its victims. I’m not a technical expert on trains either, but I know enough to get off the tracks if one is barreling toward me.
Surely, whatever benefits you foresee coming to yourself, your company or the world by increasing AI intelligence don’t outweigh your responsibility to avoid releasing a catastrophically powerful genie that cannot be rebottled. Surely, any confidence that by developing superintelligence before other companies or countries do, you can make it significantly “better” or “safer” is misplaced, because the technology itself is so uncontrollable. Surely, joining the superintelligence race on the grounds that the technology is inevitable only perpetuates a “race to the bottom.”
Our lives are in your hands. Please don’t gamble them away. Please come down on the side of life by pledging publicly now to stop developing more powerful AI.
Sincerely,
Kyle Heger
P.S. I’ve included a photo of myself, my wife and our three sons, taken soon after the youngest of them was born, so you can see the faces of some of the many people who are counting on you to make a wise choice when it comes to their future. We are not products of virtual reality or artificial intelligence. We are living, breathing creatures who want to continue living, breathing, loving, growing and making decisions for ourselves.
Petitions
Content below updated March 26, 2026
STOP AI RACE (Petition to U.S. Legislators)
Sponsored by PauseAI U.S. URL is below.
https://mstr.app/1870b6d6-c0b5-4f0c-b9d8-7dc987910b18
STOP AI-POWERED WARRANTLESS MASS SURVEILLANCE (Letter to U.S. Legislators)
Sponsored by #TeslaTakedown. URL is below.
https://actionnetwork.org/letters/stop-ai-powered-warrantless-mass-surveillance?source=direct_link&&link_id=0&can_id=999422ed4d0c42fdd8adc7aa0474630c&email_referrer=email_3160974&email_subject=is-elon-helping-trump-spy-on-you&
KEEP OUR FUTURE SAFE; SUPPORT THE AI RISK EVALUATION ACT (U.S. LEGISLATURE)
Below is the URL for this petition sponsored by PauseAI U.S.
https://mstr.app/20dcd7a0-d5e3-40ef-898a-51a3a9dbc385
RESPONSIBLE AI ACT (U.S. LEGISLATURE)
Below is the URL for this petition sponsored by PauseAI U.S.
https://mstr.app/660ca530-613e-4b7d-8f29-fe7ba03fef95
CALL FOR THE U.S. TO LEAD GLOBAL NEGOTIATIONS ON AI SAFETY
Below is the URL for this petition sponsored by PauseAI U.S.
https://mstr.app/bc22a737-04a3-47ae-bc71-96b2f36de4d7
CHATGPT BOYCOTT
Here is contact information to sign up for a boycott of ChatGPT. It's not exactly a petition, but it comes close.
https://quitgpt.org/
PETITION TO U.S. CONGRESS TO PREVENT MILITARY USE OF AI
Here's the website URL:
https://act.boldprogressives.org/survey/petition_2026-no-ai-controlled-war/?t=17&akid=92085%2E7590261%2EUhLdTG
AI IMPACT SUMMIT PETITION (SIGN BY FEB. 16, 2026)
Here is the website for a petition to be delivered at the AI Impact Summit, an international gathering in India. The petition is sponsored by the nonprofit PauseAI.
https://pauseai.info/india-summit-2026
AI SUPERINTELLIGENCE
Here is the website for a petition about AI superintelligence from The Future of Life Institute:
https://superintelligence-statement.org/
The petition says, "We call for a prohibition on the development of superintelligence, not lifted before there is 1) broad scientific consensus that it will be done safely and controllably, and 2) strong public buy-in."
AI TREATY
An organization called AI Treaty is circulating a petition calling for an international AI treaty.
Petition begins: “We call on governments worldwide to actively respond to the potentially catastrophic risks posed by advanced artificial intelligence (AI) systems to humanity, encompassing threats from misuse, systemic risks, and loss of control. We advocate for the development and ratification of an international AI treaty to reduce these risks, and ensure the benefits of AI for all.”
Organizations that have information on the dangers of AI
Updated January 25, 2026
Access Now
Its website says the organization “defends and extends the digital rights of people and communities at risk. By combining direct technical support, strategic advocacy, grassroots grantmaking, and convenings such as RightsCon, we fight for human rights in the digital age.”
AI Futures Project
Its website says, “The AI Futures Project is a small research group forecasting the future of AI, funded by charitable donations and grants.”
AI Impacts
https://aiimpacts.org/#gsc.tab=0
Its website says, “This project aims to improve our understanding of the likely impacts of human-level artificial intelligence.
“The intended audience includes researchers doing work related to artificial intelligence, philanthropists involved in funding research related to artificial intelligence, and policy-makers whose decisions may be influenced by their expectations about artificial intelligence.
“The focus is particularly on the long-term impacts of sophisticated artificial intelligence. Although human-level AI may be far in the future, there are a number of important questions which we can try to address today and may have implications for contemporary decisions.”
Artificial Intelligence Policy Institute (AIPI)
Its website says, “American voters are worried about risks from AI technology. The AI Policy Institute’s mission is to channel public concern into effective governance. We engage with policymakers, media, and the public to shape a future where AI is developed responsibly and transparently.”
AI Treaty
Circulates a petition which begins: “We call on governments worldwide to actively respond to the potentially catastrophic risks posed by advanced artificial intelligence (AI) systems to humanity, encompassing threats from misuse, systemic risks, and loss of control. We advocate for the development and ratification of an international AI treaty to reduce these risks, and ensure the benefits of AI for all.”
Americans for Responsible Innovation
Its website says, “At Americans for Responsible Innovation (ARI), we believe that AI and other emerging technologies could have a transformational impact on society – for better or worse. That’s why we’re advocating for the U.S. government to take a thoughtful and proactive approach to AI governance that protects the public while maintaining our country’s competitive edge.”
Berkeley Existential Risk Initiative (BERI)
Its website says, “BERI is an independent 501(c)(3) public charity. Our mission is to improve human civilization’s long-term prospects for survival and flourishing.
“What We Do
“Our primary focus is collaborating with university research groups working to reduce existential risk (“x-risk”), by providing them with services and support. Our goal is to make operations faster and more flexible for these groups by unblocking tasks and projects and allowing them to accomplish things that are difficult or impossible.”
Center for AI Safety (CAIS)
https://safe.ai/
Circulates a petition which says, in part, “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
Center for Democracy and Technology (CDT)
https://cdt.org/
Its website says, “The Center for Democracy & Technology (CDT) is the leading nonpartisan, nonprofit organization fighting to advance civil rights and civil liberties in the digital age.
“We shape technology policy, governance, and design with a focus on equity and democratic values. Established in 1994, CDT has been a trusted advocate for digital rights since the earliest days of the internet.”
The Centre for the Governance of AI
Its website says, “GovAI is an independent research organization dedicated to helping humanity navigate the transition to a world with advanced AI …. It conducts interdisciplinary research on AI governance issues, drawing from fields such as political science, computer science, economics, law, and philosophy. GovAI's work focuses on understanding and addressing the potential risks and benefits of artificial intelligence, particularly as AI systems become more advanced and influential in society.”
Center for Human-Compatible Artificial Intelligence (CHAI)
Its website says, “CHAI’s mission is to develop the conceptual and technical wherewithal to reorient the general thrust of AI research towards provably beneficial systems.”
Center for Responsible Innovation
https://ari.us/centerforresponsibleinnovation/
Website says, “Center for Responsible Innovation (CRI) is a 501(c)(3) organization that aims to improve the quality of the AI policy conversation. CRI focuses on promoting responsible innovation, developing actionable and politically feasible policy ideas, and educating policymakers on AI.”
Control AI
Its website says, “ControlAI is a non-profit organization that works to reduce the risks to humanity from artificial intelligence.
“We develop policy and legislation; secure media coverage; produce high-quality videos and infographics; design and run effective digital and physical campaigns; organize events; and influence policymakers. We have secured public support for our campaigns from high-ranking politicians, have authored draft bills for the UK and US, have created multiple viral videos, and have led international coalitions.
“In the UK, we operate as a nonprofit (a “private company limited by guarantee”), and in the US we operate as a nonprofit 501(c)(4).”
Design It For Us
Its website says, “We're building something new and necessary: a first-of-its-kind movement to design an online world where we can all thrive.
“Design It For Us was founded in March 2023 to ensure that youth voices are at the center of the responsible technology policymaking process. The coalition was launched by just a handful of young people and has now grown to hundreds of activists from across the country and around the world.
“Design It For Us has secured key victories in advancing online safety and privacy policies at the state and federal level in the U.S. that better the lives of young people on and offline, and uses collective youth power to hold Big Tech accountable.”
Electronic Frontier Foundation (EFF)
Its website says, “The Electronic Frontier Foundation is the leading nonprofit organization defending civil liberties in the digital world. Founded in 1990, EFF champions user privacy, free expression, and innovation through impact litigation, policy analysis, grassroots activism, and technology development. EFF's mission is to ensure that technology supports freedom, justice, and innovation for all people of the world.”
Evitable
https://evitable.com/
Its website says, “We are a new organization focused on informing and organizing the public around societal-scale risks and harms of AI.
“Founded by AI professor David Krueger, Evitable provides the public with uncompromising, independent perspectives on AI. David has been an influential advocate on AI risk within the AI research community for over a decade. In 2023, he initiated the CAIS Statement on AI Risk, alerting the world to the growing expert concern that AI might lead to human extinction. As companies openly pursue Superintelligent AI that could replace humanity, public opposition is more critical — and powerful — than ever.”
Existential Risk Observatory
https://www.existentialriskobservatory.org/
Its website says, “Existential risk has increased from almost zero to an estimated likelihood of one in six in the next hundred years, according to research from Oxford’s Future of Humanity Institute. We think this likelihood is unacceptably high.
“We also believe that the first step towards decreasing existential risk is awareness. Therefore, the Existential Risk Observatory is committed to reducing existential risk by informing the public debate.”
Fairplay for Kids
Its website says, “For over 25 years, Fairplay has been the leading voice fighting to enhance children’s well-being by eliminating the exploitative and harmful business practices of marketers and Big Tech. Join us to create a world where kids can be kids!”
Fight for the Future
https://www.fightforthefuture.org/
Its website says, “We are a group of artists, engineers, activists, and technologists who have been behind the largest online protests in human history, channeling Internet outrage into political power to win public interest victories previously thought to be impossible. We fight for a future where technology is a force for liberation— not oppression.”
Future of Life Institute
https://futureoflife.org/
Its website says, “The Future of Life Institute’s mission is to steer transformative technologies away from extreme, large-scale risks and towards benefiting life.”
The Future Society
Its website says, “At The Future Society, we have always looked ahead. We prepare decision-makers for the geopolitical and security disruptions that AI brings.
“What motivates us? Our vision is a world where humanity is protected from the gravest risks of powerful AI, as governance prevents catastrophic harms and conflicts, and fosters global security, sovereignty, and human progress.
“This led to The Future Society’s establishment as a U.S.-registered 501(c)3 in 2016, driven by the idea that AI governance is fundamental for enabling an AI-powered era.
“How We Work
“The Future Society advises American, European, and other influential decision-makers worldwide. We draw on our analyses and strong networks to provide them with pragmatic guidance. We help governments act for security early, coordinate amidst shifting power dynamics, and navigate technological, catastrophic, and political risks from powerful AI."
Institute for AI Policy and Strategy (IAPS)
Its website says, “Securing a positive future in a world with powerful AI … The Institute for AI Policy and Strategy (IAPS) is a nonpartisan think tank that produces policy research to address the implications of AI, from today’s most advanced models to potential AGI and superintelligence. Our work equips policymakers and industry leaders to protect innovation while navigating high-magnitude risks and opportunities at the intersection of AI, national security, and geopolitics.”
Machine Intelligence Research Institute (MIRI)
Its website says, “The Machine Intelligence Research Institute (MIRI) is a 501(c)(3) nonprofit based in Berkeley, California. We do research and public outreach intended to help prevent human extinction from the development of artificial superintelligence (ASI).”
PauseAI
Its website says, “We were founded in Utrecht, Netherlands in May 2023 by Joep Meindertsma, who put his job on hold because he couldn't ignore the existential risks from artificial intelligence any longer. We began with our first public action, which was a protest outside Microsoft's Brussels lobbying office.
“What started as one person's call to action has grown into a global grassroots movement with volunteers, national chapters, and local communities across the world, all working toward the same goal: pausing frontier AI development until we can prove it's safe and keep it under democratic control.”
Public First Action
https://publicfirstaction.us/contact
Website says, “Public First Action is a bipartisan organization designed to educate Americans on key AI issues and advance an AI policy agenda supporting safeguards.”
Stop AI
Its website says, “We are non-violent activists working to permanently ban the development of Artificial Superintelligence (ASI) to prevent human extinction, mass job loss, and many other problems.”
The Most Important Book about AI
https://ifanyonebuildsit.com/?ref=iojune
IF ANYONE BUILDS IT, EVERYONE DIES
Eliezer Yudkowsky & Nate Soares
Superhuman AI threatens human extinction. But it's not too late to change course.
In 2023, hundreds of AI luminaries signed an open letter warning that artificial intelligence poses a serious risk of human extinction. Since then, the AI race has only intensified. Companies and countries are rushing to build machines that will be smarter than any person. And the world is devastatingly unprepared for what would come next.
For decades, two signatories of that letter — Eliezer Yudkowsky and Nate Soares — have studied how smarter-than-human intelligences will think, behave, and pursue their objectives. Their research says that sufficiently smart AIs will develop goals of their own that put them in conflict with us — and that if it comes to conflict, an artificial superintelligence would crush us. The contest wouldn't even be close.
How could a machine superintelligence wipe out our entire species? Why would it want to? Would it want anything at all? In this urgent book, Yudkowsky and Soares walk through the theory and the evidence, present one possible extinction scenario, and explain what it would take for humanity to survive.
The world is racing to build something truly new under the sun. And if anyone builds it, everyone dies.
Posted January 24, 2026