A coming-out party for Generative AI, Silicon Valley’s new craze

In Silicon Valley, crypto and the metaverse are out. Generative AI is in.

That was made clear Monday night at the San Francisco Exploratorium, where Stability AI, the startup behind the popular image-generating algorithm Stable Diffusion, threw a party that felt like a return to prepandemic exuberance.

The event – which drew tech luminaries like Google co-founder Sergey Brin, AngelList founder Naval Ravikant and venture capitalist Ron Conway out of their Zoom rooms – was billed as a launch party for Stability AI and a celebration of the company’s recent $101 million fundraising round, which reportedly valued the company at $1 billion.

But it was also a coming-out bash for the entire field of generative AI — the loose umbrella term for AI that not only analyzes existing data but also creates new text, images, videos, snippets of code and more.

In particular, it’s been a stellar year for generative AI apps that convert text prompts into images — which, unlike NFTs or virtual reality metaverses, actually have the numbers to justify the hype they’ve received. DALL-E 2, the image generator that OpenAI released this spring, has more than 1.5 million users, creating more than two million images every day, according to the company. Midjourney, another popular AI image generator released this year, has more than three million users on its official Discord server. (Google and Meta built their own image generators but didn’t release them to the public.)

That kind of growth has set off a feeding frenzy among investors hoping to get in early on the next big thing. Jasper, a year-old AI copywriting app for marketers, recently raised $125 million at a $1.5 billion valuation. Start-ups have raised millions more to apply generative AI in areas such as gaming, programming and advertising. Sequoia Capital, the venture capital firm, said in a recent blog post that it believes generative AI could create “trillions of dollars in economic value.”

But no generative AI project has garnered as much attention — or as much controversy — as Stable Diffusion.

That’s partly because, unlike many generative AI projects that are carefully guarded by their makers, Stable Diffusion is open source and free to use, meaning anyone can view or download the code and run a modified version of it on a personal computer. More than 200,000 people have downloaded the code since its release in August, the company says, and millions of images have been created using tools built on top of Stable Diffusion’s algorithm.

This hands-on approach extends to the images themselves. Unlike other AI image generators, which have strict rules to prevent users from creating violent, pornographic or copyright-infringing images, Stable Diffusion comes with only a basic safety filter, which can be easily disabled by users who create their own versions of the app.

This freedom has made Stable Diffusion a hit with underground artists and meme makers. But it has also prompted widespread concern that the company’s lax rules could lead to a spate of violent imagery, non-consensual nudity, and AI-generated propaganda and misinformation.

Stable Diffusion and its open-source offshoots have already been used to create many objectionable images (including, judging from a quick scan of Twitter, a truly staggering amount of anime pornography). In the past few days, several Reddit forums have been shut down after being inundated with non-consensual nude images, mostly created with Stable Diffusion. The company has tried to contain the chaos, telling users not to create “something you’d be ashamed to show your mother,” but it has stopped short of imposing stricter filters.

Rep. Anna Eshoo, a Democrat from California, recently sent a letter to federal regulators warning that people were using Stable Diffusion to create graphic images of “violently beaten Asian women.” Ms. Eshoo called on regulators to crack down on “unsafe” open-source AI models.

Emad Mostaque, the founder and CEO of Stability AI, has dismissed the idea of content restrictions. He argues that radical freedom is necessary to realize his vision of democratized AI, detached from corporate influence.

He reiterated that view in an interview with me this week, contrasting his approach with what he described as the tech giants’ heavy-handed, paternalistic approach to AI.

“We trust the people and we trust the community,” he said, “as opposed to a centralized, unelected entity that controls the world’s most powerful technology.”

Mr. Mostaque, 39, is an unlikely frontman for the generative AI industry.

He doesn’t have a Ph.D. in artificial intelligence, nor has he worked at any of the big tech companies that typically spawn AI projects, like Google or OpenAI. He is a British former hedge fund manager who has spent much of the past decade trading oil and advising companies and governments on Middle East strategy and the threat of Islamic extremism. More recently, he organized an alliance of think tanks and tech groups trying to use big data to help governments make better decisions about Covid-19.

Mr. Mostaque, who originally self-funded Stability AI, has quickly become a polarizing figure within the AI community. Researchers and executives at larger, more conventional AI organizations characterize his open-source approach as either naïve or reckless. Some worry that releasing open-source generative AI models without guardrails could provoke a backlash from regulators and the general public that could hurt the entire industry.

But on Monday night, Mr. Mostaque was greeted like a hero by a group of several hundred AI researchers, social media executives and tech Twitter personalities.

He took a number of veiled shots at tech giants like Google and OpenAI, which is funded by Microsoft. He denounced targeted advertising, the core of Google’s and Facebook’s business models, as “manipulative technology,” and said that unlike those companies, Stability AI would not build a “panopticon” that spies on its users. (That elicited a groan from Mr. Brin.)

He also earned cheers when he announced that the computer the company uses to train its AI models, which has more than 5,000 powerful graphics cards and is already one of the largest supercomputers in the world, would grow to five or ten times its current size within the next year. That firepower would allow the company to expand beyond AI-generated images into video, audio and other formats, and would make it easier for users around the world to run their own, localized versions of its algorithms.

Unlike some AI critics, who fear the technology could cost artists and other creative professionals their jobs, Mr. Mostaque believes that putting generative AI into the hands of billions of people will lead to an explosion of new possibilities.

“So much of the world is creatively clogged, and we’re going to make it so they can poop rainbows,” he said.

If all of this sounds eerily familiar, that’s because Mr. Mostaque’s rhetoric echoes the utopian dreams of an earlier generation of tech founders like Facebook’s Mark Zuckerberg and Twitter’s Jack Dorsey. Those men also raced to get powerful new technology into the hands of billions, and rarely stopped to consider the harm that might result.

When I asked Mr. Mostaque if he was worried about unleashing generative AI on the world before it was safe, he said he wasn’t. AI is advancing so rapidly, he said, that it’s safest to make it publicly available so communities — not big tech companies — can decide how to govern it.

Ultimately, he said, it is transparency, rather than top-down control, that will keep generative AI from becoming a dangerous force.

“You can look at the data. You can look at the model. You can look at Stable Diffusion’s code and the other things we do,” he said. “And we see that it is constantly being improved.”

His vision of an open-source AI utopia may seem far-fetched, but on Monday night he found plenty of people eager to help make it a reality.

“You can’t put the genie back in the bottle,” said Peter Wang, an Austin-based technology executive who was in town for the party. “But at least you can let everyone look at the genie.”


