Hugging Face cofounder Thomas Wolf says open-source AI’s benefits far outweigh its risks



In this edition…a Hugging Face cofounder on the importance of open source…a Nobel Prize for Geoff Hinton and John Hopfield…a movie model from Meta…a Trump ‘Manhattan Project’ for AI?

Hello, and welcome to Eye on AI.

Yesterday, I had the privilege of moderating a fireside chat with Thomas Wolf, the cofounder and chief scientific officer at Hugging Face, at the CogX Global Leadership Summit at the Royal Albert Hall in London.

Hugging Face, of course, is the world’s leading repository for open-source AI models—the GitHub of AI, if you will. Founded in 2016 (in New York, as Wolf reminded me on stage when I erroneously said the company was founded in Paris), the company was valued at $4.5 billion in its latest $235 million venture capital funding round in August 2023.

It was fascinating to listen to Wolf speak about what he sees as the vital importance of open-source AI models to making sure AI is ultimately a successful, impactful technology. Here are some key insights from our conversation.

Smaller is better

Wolf argued that it was the open-source community that was leading the way in the effort to produce smaller AI models that perform as well as larger ones. He noted that Meta’s newly released Llama 3.2 family includes two small models—at 1 billion and 3 billion parameters, compared to the tens of billions or even hundreds of billions of parameters in larger models—that perform as well as much larger models on many text-based tasks, including summarization.

Smaller models, Wolf argued, would be essential for two reasons. First, they would let people run AI directly on smartphones, tablets, and perhaps eventually other devices, without having to transmit data to the cloud. That was better for privacy and data security, and it would let people enjoy the benefits of AI even without a constant, high-speed broadband connection.

Second, and more importantly, smaller models use less energy than large models running in data centers, which matters for combating AI’s growing carbon footprint and water usage.
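To make the on-device point concrete, here is a minimal sketch, using Hugging Face’s transformers library, of running a roughly 1-billion-parameter model locally for summarization. The model ID is Meta’s gated Llama 3.2 1B instruct checkpoint; any small open model you can download would work the same way.

```python
# A minimal sketch of running a small open-weights model entirely on-device.
# The model ID below is Meta's 1B-parameter Llama 3.2 instruct variant;
# access to its weights is gated behind Meta's license, so substitute any
# small open model you can download.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.2-1B-Instruct",  # ~1B parameters; fits on a laptop
    device_map="auto",  # use a local GPU if present, otherwise CPU
)

# Nothing leaves the machine: the article and its summary stay local.
article = "..."  # text to summarize
out = generator(
    [{"role": "user", "content": f"Summarize in two sentences:\n{article}"}],
    max_new_tokens=128,
)
print(out[0]["generated_text"][-1]["content"])
```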

Democratizing AI

Critically, Wolf sees open-source AI and small models as fundamentally “democratizing” the technology. He, like many, is disturbed by the extent to which AI has simply reinforced the power of large technology giants, such as Microsoft, Google, Amazon, and, yes, Meta, even though Meta has arguably done more for open-source AI than any of the others.

While OpenAI and, to a lesser extent, Anthropic, have emerged as key players in the development of frontier AI capabilities, they have only been able to do so through close partnerships and funding relationships with tech giants (Microsoft in the case of OpenAI; Amazon and Google in the case of Anthropic). Many of the other companies working on proprietary LLMs—Inflection, Character.ai, Adept, Aleph Alpha, to name just a few—have pivoted away from trying to build the most capable models.

The only way to ensure that just a handful of companies don’t monopolize this vital technology is to make it freely available to developers and researchers as open-source software, Wolf said. Open-source models—and particularly small open-source models—also gave companies more control over how much they were spending, which he saw as critical to businesses actually realizing that elusive return on investment from AI.

Safer in the long run

I pressed Wolf about the security risks of open-source AI. He said other kinds of open-source software—such as Linux—have wound up being more secure than proprietary software because there are so many people who can scrutinize the code, find security vulnerabilities, and then figure out how to fix them. He said he thought that open-source AI would prove to be no different.

I told Wolf I was less confident than he was. Right now, if an attacker has access to a model’s weights, it is simple to craft prompts—some of which might look like gibberish to a human—designed to get that model to jump its guardrails and do something it isn’t supposed to, whether that is coughing up proprietary data, writing malware, or giving the user a recipe for a bioweapon.

What’s more, research has shown that an attacker can use the weights of open-source models to design adversarial prompts, sometimes called “jailbreak” attacks, that also work reasonably well against proprietary models. So open models are not just more vulnerable themselves; they potentially make the entire AI ecosystem less secure.
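For readers curious about the mechanics, here is a loose, deliberately incomplete sketch of the white-box step such research describes: with the weights in hand, an attacker can backpropagate through the model to score candidate suffix tokens. The helper function is hypothetical, and the surrounding discrete search loop is omitted; this illustrates where the gradient signal comes from, not a working attack.

```python
# Illustration only: why open weights help attackers. With white-box access
# you can differentiate through the model and score token substitutions in
# an adversarial suffix. Published work (e.g., Zou et al., 2023) wraps a
# step like this in a discrete search loop, which is not shown here.
import torch
import torch.nn.functional as F

def suffix_token_gradients(model, prompt_ids, suffix_ids, target_ids):
    """Gradient of the target loss w.r.t. one-hot suffix tokens."""
    embed = model.get_input_embeddings()
    one_hot = F.one_hot(suffix_ids, num_classes=embed.num_embeddings)
    one_hot = one_hot.float().requires_grad_(True)
    # Differentiable embedding of the suffix; prompt and target stay fixed.
    seq = torch.cat(
        [embed(prompt_ids), one_hot @ embed.weight, embed(target_ids)], dim=0
    ).unsqueeze(0)
    logits = model(inputs_embeds=seq).logits
    n = target_ids.shape[0]
    # Loss of the model producing the forbidden target continuation.
    loss = F.cross_entropy(logits[0, -n - 1 : -1], target_ids)
    loss.backward()
    # Strongly negative entries mark token swaps likely to lower the loss.
    return one_hot.grad
```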

Wolf acknowledged that there might be a tradeoff—with open models being more vulnerable in the near term until researchers could figure out how to better safeguard them. But he insisted that in the long term, having so many eyes on a model would make the technology more secure.

Openness, on a spectrum

I also asked Wolf about the controversy over Meta labeling its AI software open source, given that open-source purists criticize the company for placing some restrictions on the license terms of its AI models and for not fully disclosing the datasets on which its models are trained. Wolf said it was best to be less dogmatic and to think of openness as existing on a spectrum, with some models, such as Meta’s, being “semi-open.”

Better benchmarks

One of the things Hugging Face is best known for is its leaderboards, which rank open-source models against one another based on their performance on certain benchmarks. While the leaderboards are helpful, I bemoaned the fact that almost none seek to show how well AI models work as an aid to human labor and intelligence. It is in this “copilot” role that AI models have found their best uses so far, and yet there are almost no benchmarks for how well humans perform when assisted by different AI software. Instead, the leaderboards pit the models against one another and against human-level performance—which tends to frame the technology as a replacement for human intelligence and labor.

Wolf agreed that it would be great to have benchmarks that looked at how humans do when assisted by AI—and he noted that some early models for coding did have such benchmarks—but he said these benchmark tests were more expensive to run since you had to pay human testers, which is why he thought few companies attempted them.

Making money

Interestingly, Wolf also told me Hugging Face is bucking a trend among AI companies: It’s cashflow positive. (The company makes money on consulting projects and by selling tools for enterprise developers.) By contrast, OpenAI is thought to be burning through billions of dollars. Maybe there really is a profitable future in giving AI models away.

With that, here’s more AI news.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

Before we get to the news: If you want to learn more about AI and its likely impacts on our companies, our jobs, our society, and even our own personal lives, please consider picking up a copy of my book, Mastering AI: A Survival Guide to Our Superpowered Future. It’s out now in the U.S. from Simon & Schuster, and you can order a copy today here. In the U.K. and Commonwealth countries, you can buy the British edition from Bedford Square Publishers here.

AI IN THE NEWS

A Nobel Prize for neural network pioneers Hinton and Hopfield. The Royal Swedish Academy of Sciences awarded the Nobel Prize in physics to deep learning “godfather” Geoffrey Hinton and machine learning pioneer John Hopfield for their work on the artificial neural networks that underpin today’s AI revolution. You can read more from my Fortune colleague David Meyer here.

Meta debuts movie generation AI model. The social media company unveiled Movie Gen, a powerful generative AI model that can create high-quality short videos from text prompts. Text prompts can also be used to edit the videos and the model can automatically create AI-generated sound effects or music appropriate to the scene—an advance over other text-to-video software that has so far only been able to create videos without sound, the New York Times reported. The model will compete with OpenAI’s Sora, Luma’s Dream Machine, and Runway’s Gen 3 Alpha models.

Another OpenAI researcher jumps ship—this time to Google DeepMind. Tim Brooks, who co-led the development of OpenAI’s text-to-video generation model, Sora, announced on X that he was leaving OpenAI to join Google DeepMind. Brooks joins a growing list of prominent OpenAI researchers who have left the company recently. TechCrunch has more here.

Amazon deploys an AI HR coach. That’s according to a story in The Information, which quotes Beth Galetti, Amazon’s senior vice president of people experience and tech, speaking at a conference. She said the company trained a generative AI model on employee performance reviews and promotion assessments to act as a coach for employees seeking advice on the best way to approach difficult conversations with managers or direct reports.

OpenAI is drifting away from Microsoft for its data center needs. That’s according to The Information, which quotes people who have heard OpenAI CEO Sam Altman and CFO Sarah Friar discussing plans to reduce the company’s dependence on Microsoft’s GPU clusters. OpenAI recently signed a deal to rent time on GPUs in a data center in Abilene, Texas, that’s being developed by Microsoft rival Oracle. The publication said OpenAI is concerned Microsoft cannot give it access to enough data center capacity to keep pace with competitors, particularly Elon Musk’s xAI. Musk has recently boasted about creating one of the world’s largest clusters of Nvidia GPUs.

EYE ON AI RESEARCH

Maybe next-token prediction works for everything? Transformers that simply predict the next token in a sequence have proven remarkably powerful for building large language models (LLMs). But text-to-image, text-to-video, and text-to-audio generation have usually relied on other methods, often in combination with an LLM. For images, this is often a diffusion model, in which the system learns to take an image that has been distorted and blurred with statistical noise and then remove that noise to restore the original crisp image. Sometimes a compositional technique is used, in which the model learns from images paired with text labels. But researchers at the Beijing Academy of Artificial Intelligence have published a paper showing that simply training a model to predict the next token on multimodal data (text, still images, and video) can produce an AI model that is just as good as those trained in more complicated ways. The researchers call their model Emu3. You can read the research paper on arxiv.org here and see a blog with examples of its outputs here.
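As a rough illustration of the recipe (a toy sketch, not Emu3 itself): assume every modality has already been converted to discrete token IDs in one shared vocabulary—text via an ordinary tokenizer, images and video frames via a learned visual tokenizer such as a VQ codebook, both assumed here—and train one causal transformer with the same next-token cross-entropy loss on the interleaved sequence.

```python
# Toy sketch of single-objective multimodal training: one shared vocabulary,
# one causal transformer, one next-token cross-entropy loss. The tokenizers
# that would map text/images/video into these IDs are assumed, not shown.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB = 70_000  # hypothetical: text tokens plus visual codebook entries

class TinyCausalLM(nn.Module):
    def __init__(self, vocab=VOCAB, dim=256, layers=4, heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        block = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(block, layers)
        self.head = nn.Linear(dim, vocab)

    def forward(self, ids):
        mask = nn.Transformer.generate_square_subsequent_mask(ids.shape[1])
        h = self.blocks(self.embed(ids), mask=mask, is_causal=True)
        return self.head(h)

# One interleaved sequence per example, e.g. [text tokens][image codes][video codes].
batch = torch.randint(0, VOCAB, (2, 512))  # stand-in for real multimodal tokens
logits = TinyCausalLM()(batch)
loss = F.cross_entropy(  # identical loss regardless of modality
    logits[:, :-1].reshape(-1, VOCAB), batch[:, 1:].reshape(-1)
)
loss.backward()
```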

FORTUNE ON AI

Meet the former Amazon VP driving Hershey’s tech transformation —by John Kell

Doctors and lawyers, need a side hustle? Startup Kiva AI pays crypto to overseas experts who contribute to its ‘human-in-the-loop’ AI service —by Catherine McGrath

Why Medtronic wants every business unit to have a plan for AI —by John Kell

Google DeepMind exec says AI will increase efficiency so much it’s expected to handle 50% of info requests in its legal department —by Paolo Confino

AI assistants are ratting you out for badmouthing your coworkers —by Sydney Lake

AI CALENDAR

Oct. 22-23: TedAI, San Francisco

Oct. 28-30: Voice & AI, Arlington, Va.

Nov. 19-22: Microsoft Ignite, Chicago

Dec. 2-6: AWS re:Invent, Las Vegas

Dec. 8-12: Neural Information Processing Systems (NeurIPS) 2024, Vancouver, British Columbia

Dec. 9-10: Fortune Brainstorm AI, San Francisco (register here)

BRAIN FOOD

If Trump wins, will we see a Manhattan Project to build AGI and ASI? Some people think so after noticing former President Donald Trump’s daughter Ivanka post approvingly on social media about a monograph published by former OpenAI researcher Leopold Aschenbrenner. On Sept. 25, Ivanka posted on X that Aschenbrenner’s book-length treatise, “Situational Awareness,” was “an excellent and important read.”

In the document, which Aschenbrenner published online in June, he predicts that OpenAI or one of its rivals will achieve artificial general intelligence (AGI) before the decade is out, with 2027 being the most likely year. He also says the U.S. and its allies must beat China in the race to develop AGI and then artificial superintelligence (ASI), an even more powerful technology that would be smarter than all humanity combined. The only way to guarantee this, Aschenbrenner argues, is for the U.S. government to get directly involved in securing the leading AI labs and for it to launch a government-led and funded Manhattan Project-like effort to develop ASI.

So far, the Republican Party’s platform on AI has been heavily influenced by the Silicon Valley venture capitalists most closely affiliated with the e/acc movement, whose believers espouse the idea that the benefits of superpowerful AI so outweigh any risks that there should be no regulation of AI at all. Trump has promised to immediately rescind President Joe Biden’s executive order on AI, which imposed reporting and safety requirements on the companies working on the most advanced AI models. It would be ironic, then, if Trump wins the election and, influenced by Ivanka’s views and in turn Aschenbrenner’s, actually winds up nationalizing the AGI effort. I wonder what Ivanka’s brother-in-law, Joshua Kushner, the managing partner at Thrive Capital, which just led OpenAI’s record-breaking $6.6 billion funding round, thinks of that idea?


