Newsletter #13 Show Me How You Work: Week of August 12, 2024
SB 1047 is looming this week, while Amazon saves a quarter billion dollars in developer time.
Table of Contents
Thank You
The Rundown
News and Research
Podcast Episodes, Deep Dives, and Projects
Final Thoughts
Thank You
Thank you for all the time you have spent reading this newsletter and engaging with the other content that I create. It has been a privilege to engage with you, continue the conversations we have started, and remain in touch. This has been an experiment in building my voice, and it’s been an honor to share it with you. Several of you reached out about my Deep Thoughts #2 post about foxes and hedgehogs, and I appreciate it.
I see this newsletter and content platform as a place to engage with tech, where thoughtful discussion can occur both about how technology shapes our world and how it actually works.
Please keep engaging and helping this community grow. I am planning to host some events in September and October. Stay tuned!
Josh
The Rundown
This week, instead of focusing more broadly, I wanted to take a deep dive into California legislators’ attempt to regulate AI. While I had planned to discuss my sojourn to a Michigan tractor fair, this is more important. Believe me, I am no shill for the major tech companies, but this is bad policy. As a regular user of AI, I find it almost silly to think that this “stochastic parrot” will rise up against us. At best, it can only “squeak” if we choose to listen to it. Read below.
California Senate Bill 1047 (SB 1047): A Flawed Attempt to Regulate AI
California Senate Bill 1047 (SB 1047) represents a well-intentioned but ultimately flawed attempt to regulate artificial intelligence (AI) within the state. Though born out of concerns for AI safety, the bill poses significant risks to open innovation, academic research, and the broader technological ecosystem. It is poor policy due to its unrealistic demands on developers, its potential to stifle open-source innovation, its failure to effectively address the actual risks associated with AI, and its troubling implications given the precedent set by Section 230 of the Communications Decency Act.
Unrealistic Demands on AI Developers
One of the most problematic aspects of SB 1047 is its requirement for AI developers to implement "administrative, technical, and physical" safeguards to prevent misuse of their models. While this might sound reasonable in theory, it is fundamentally incompatible with the principles of open-source development. Open models, by their nature, are designed to be accessible for independent experimentation, modification, and use by a wide range of developers and researchers. However, SB 1047 demands a level of control over these models that open-source developers simply cannot maintain once their models are released to the public.
For instance, the bill requires developers to ensure that their models cannot be misused to cause significant economic harm, even if those models are modified by third parties. This is an impossible standard to meet. AI models, like any other digital technology, can be adapted for a wide range of uses—both beneficial and harmful—by those with the necessary expertise. Expecting developers to foresee and prevent every potential misuse of their models is akin to asking a paper manufacturer to monitor and control what people choose to write or print. Such a burden would likely deter many developers from releasing open-source models altogether, thereby stifling innovation.
The Importance of Open-Source AI Development
Open-source AI development is crucial for the advancement of technology. It allows a diverse range of developers, from startups to academic researchers, to contribute to and benefit from cutting-edge AI models. However, SB 1047 threatens this collaborative environment by imposing stringent requirements incompatible with the open-source ethos.
The bill's provisions, such as requiring a "kill switch" to shut down AI models, are particularly concerning. The idea that a developer could be forced to deactivate a model at any time undermines the very principles of open-source software. If developers fear that the tools they create and share could be abruptly rendered unusable, they will be far less inclined to innovate or collaborate. This would have a chilling effect on the AI community, particularly in academia, where open-source models are often the foundation for research and education.
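To make the incompatibility concrete, here is a minimal, hypothetical sketch of what a software kill switch might look like. Every name in it (the endpoint, the functions) is my own invention for illustration; it is not language from the bill or code from any real model:

```python
# Hypothetical sketch: a "kill switch" as a remote flag checked before inference.
import urllib.request

KILL_SWITCH_URL = "https://example.com/model-status"  # made-up endpoint


def model_is_enabled() -> bool:
    """Ask a developer-controlled server whether the model may still run."""
    try:
        with urllib.request.urlopen(KILL_SWITCH_URL, timeout=5) as resp:
            return resp.read().strip() == b"enabled"
    except OSError:
        return False  # fail closed if the server is unreachable


def run_inference(prompt: str) -> str:
    return f"(model output for: {prompt})"  # stand-in for a real model call


def generate(prompt: str) -> str:
    if not model_is_enabled():
        raise RuntimeError("Model deactivated by its developer.")
    return run_inference(prompt)
```

The problem is obvious once you see it: the moment a model's weights are publicly downloadable, any user can simply delete the model_is_enabled() check from their copy. For open-source software, a kill switch is a suggestion, not a control.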
Moreover, SB 1047's broad scope, which covers not only original models but also their derivatives, exacerbates the problem. By extending these regulations to modified versions of models, the bill discourages the iterative improvements and customizations that are the hallmark of open-source development. This could significantly slow the pace of AI advancement, as developers might avoid using open-source models for fear of running afoul of the law.
Misplaced Focus on Hypothetical AI Risks
While SB 1047 aims to mitigate the risks associated with AI, it fails to target the most pressing concerns. The bill focuses on hypothetical scenarios where AI models are used to cause catastrophic harm, such as creating weapons of mass destruction. However, these scenarios are far less likely than the more immediate and tangible threats posed by malicious AI users.
The real danger lies not in the AI models themselves, but in how they are used. Bad actors can employ AI to enhance traditional cyber threats like spear phishing, deepfakes, and misinformation campaigns. These issues are not new, and AI models do not fundamentally change the nature of these threats—they simply make them more efficient. Instead of placing the burden on developers to prevent every possible misuse of their models, policymakers should focus on improving AI security by targeting the malicious actors who exploit these technologies.
Philosophical Divide: The "Guns Don't Kill People" Argument
While liberals may see this as similar to the "guns don't kill people, people kill people" argument often used by conservatives in the gun control debate, the comparison highlights a key philosophical divide. Just as gun control advocates argue that regulating access to firearms can prevent violence, proponents of SB 1047 believe that stricter controls on AI models can prevent misuse. However, this analogy also underscores the limitations of such an approach. Just as banning or heavily regulating guns does not eliminate the potential for violence, imposing stringent regulations on AI models does not prevent harmful uses—especially when the real issue lies with the individuals who misuse the technology, not the technology itself.
Arbitrary Thresholds and Their Consequences
Furthermore, the bill's arbitrary thresholds for regulation—such as the amount of computing power used or the cost to train a model—are misguided. These criteria do not necessarily correlate with the potential for harm, and they could inadvertently hinder innovation across various sectors. For example, as AI technology continues to advance, academic models that currently fall below these thresholds may soon exceed them, bringing academia under the bill's restrictive regulations. This would be detrimental to AI research, particularly in public institutions that rely on open-source models for their work.
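To see how blunt these thresholds are, here is a back-of-envelope sketch. It uses the common approximation that training compute is roughly 6 × parameters × training tokens; the 10^26 FLOPs figure is the compute threshold widely cited in the SB 1047 debate, and both model configurations below are hypothetical:

```python
# Back-of-envelope illustration of a compute threshold (numbers are my own).
THRESHOLD_FLOPS = 1e26  # the compute threshold widely cited around SB 1047


def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute via the common 6*N*D rule of thumb."""
    return 6 * params * tokens


models = {
    "hypothetical 7B academic model, 2T tokens": training_flops(7e9, 2e12),
    "hypothetical 1T frontier model, 20T tokens": training_flops(1e12, 2e13),
}

for name, flops in models.items():
    status = "over" if flops > THRESHOLD_FLOPS else "under"
    print(f"{name}: ~{flops:.1e} FLOPs ({status} the threshold)")
```

Note what a threshold like this actually measures: how big and how recent a model is, not how dangerous it is. As hardware gets cheaper, the same academic budget buys more compute every year, and models that are unregulated today drift over the line tomorrow.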
Section 230 and the Burden on AI Developers
Finally, American law has long treated internet platforms as independent from their users. Section 230 has been a cornerstone of internet law, providing online platforms immunity from liability for content posted by users. This legal protection has been crucial for the growth of the internet and for enabling platforms to host user-generated content without fear of constant litigation. However, SB 1047 could be seen as an attempt to impose on AI developers the very burden Section 230 lifted from platforms: responsibility for the actions taken by others using their models.
The parallels to the Section 230 debate are clear: just as some policymakers seek to hold social media platforms accountable for user content, SB 1047 seeks to hold AI developers responsible for how others use their models. This approach could have severe consequences for innovation. If developers are held liable for all possible misuse of their AI models, they may be forced to limit access to them or avoid releasing them altogether. Model makers should be able to put in guardrails, but as the Section 230 debate settled for platforms, they should not be liable for others' misuse.
A Call for Refined AI Oversight
California's SB 1047 is a deeply flawed piece of legislation. Its unrealistic demands on AI developers, stifling effect on open-source innovation, misplaced focus on hypothetical AI risks, and troubling implications for the Section 230 debate make it a poor policy choice. Rather than enacting broad and restrictive regulations, California should work to refine its approach to AI oversight.
News and Research
Amazon Q cut the average time to upgrade a Java application from 50 developer-days to just a few hours, saving the equivalent of 4,500 developer-years of work; in under six months, Amazon Q delivered an estimated $260M in annualized efficiency gains! Read in Yahoo.
Fireworks shares its approach to quantization and the tradeoffs it involves for Llama 3.1. Read in Fireworks AI.
Ex-Google CEO Eric Schmidt made controversial comments about AI startups, saying aloud what we already figured executives thought. Read in The Verge.
Telegram CEO Pavel Durov arrested near Paris over his platform's lack of moderation. I am watching this story because Durov is no friend of Putin. Read in The Verge.
The collapse of major crypto-friendly banks leads to the rise of crypto shadow bankers. Read in The Information.
Possible cyberattack hits Port of Seattle and Sea-Tac Airport, disrupting systems. Read in TechCrunch.
Bolt's aggressive fundraising deal has implications for investors and founder succession. This is definitely worth a read. Read in TechCrunch.
Meta Platforms CEO Mark Zuckerberg faces challenges with lackluster downloads for Llama 3.1. Read in The Information.
Podcast Episodes, Deep Dives, and Projects
Please read my Deep Thoughts post from last week about risk and venture capital.
New content coming down the pike soon!
Final Thoughts
I have some exciting opportunities on the horizon! Please reach out at [email protected]; I now have more time to speak with people and help where I can. I am still actively angel investing.