The Street
Ian Krietzberg

Exclusive: PhotoRoom CEO on the critical ways it isn't like OpenAI or Midjourney

Fast Facts

  • TheStreet spoke with Matthieu Rouif, Photoroom's co-founder and CEO. 
  • Rouif explained Photoroom's approach to the ethics of its new image-generation tools, highlighting guardrails and training-data filtration. 
  • The company announced a $43 million Series B funding round in February. 

Related: No, Elon Musk, AI self-awareness is not 'inevitable'

The ethical conundrum of GenAI

The past year has seen a proliferation of both artificial intelligence tools and ethical concerns about those tools. 

Among these concerns is one of massive copyright infringement, both in the input and output of generative AI tools. Such copyright concerns have fueled many of the civil lawsuits filed against AI companies over the past year by a mix of artists, writers and media companies, targeting text- and image-generation firms from OpenAI to Midjourney and Stability. 

At the center of these concerns is a fundamental debate over whether it is fair to train commercial AI products using content without crediting, paying or asking the original creators. 

The other major ethical concern that has arisen alongside the recent growth in AI involves deepfakes and the harm they can cause, from disrupting electoral processes around the world to supercharging cybercrime and fraud efforts and sharpening the spear of online harassment. 

This is perhaps best exemplified by three recent occurrences: the viral, explicit deepfakes of Taylor Swift, the $25 million deepfake-powered corporate theft and the AI-generated robocall of President Joe Biden that encouraged voters not to participate in the New Hampshire primary. 

Each of these incidents occurred within weeks of one another. 

In the midst of this, a group of 20 AI and social media companies signed a voluntary "AI Elections Accord," a pact intended to mitigate the risks posed by their technology. 

Part of the pact involved a push to increase guardrails and watermark content, though it did not ban the creation or dissemination of misleading electoral content. 

Neither Microsoft (MSFT) nor representatives of the Accord responded to TheStreet's detailed requests for comment. 

In this environment of rapid growth and rapidly growing concerns, specifically at the start of one of the biggest global election years on record, AI-powered photo editing startup Photoroom is striving to handle safety and ethics a little differently. 

The French startup launched in 2019 primarily as a background remover; 150 million downloads later, Photoroom has been expanding its offerings as an AI-powered all-around photo editing app. The company closed a $43 million Series B funding round last month at a $500 million valuation, simultaneously launching a new suite of AI-powered editing tools built on a house-made, custom AI model. 

TheStreet sat down with Matthieu Rouif, Photoroom's co-founder and CEO, to discuss the company's approach to and perspective on the ethics and safety of generative AI. 

Related: Building trust in AI: Watermarking is only one piece of the puzzle

Photoroom's targeted approach

Photoroom's approach to safety is built around a simple intention: not to manipulate the main subject of a given photo. 

The subject, Rouif said, "is always true."

"Especially because we come from the e-commerce space, the pixel of the main subject is never generated," he said. "Because where we come from, you don't buy something where it's not real. To build trust, it's important."

Rouif said that the main use case of Photoroom's image generation model is to generate "accessories and props" for a user's main image. The company's focus on "helping people grow their business," rather than building a text-to-image generator, Rouif said, allows Photoroom to play in a different space than Midjourney, OpenAI or Stability, acting less as a direct competitor. 

An extension of that effort involves strict guardrails on the model, which prevent the generation of explicit or violent images, according to Rouif. He added that the model can't be used to generate artificial images of political figures. 

Photoroom also took steps during the training of its model, filtering its training set to ensure that the model does not generate such images. 

"We don't show these images to our model. The AI model doesn't see violence, it doesn't see not-safe-for-work images," he said. 

That filtered approach comes just a few months after researchers at Stanford identified hundreds of instances of child sexual abuse material (CSAM) in LAION-5B, a popular open dataset that was used to train Stability's models, among others. 

Fine-tuning after the fact, Rouif said, isn't good enough.

"You can only be confident if you start from the beginning," he said. 

"Our model is the best in the world for what we call completion," Rouif added, referring to the creation of a professional-looking scene around an otherwise real product photo. He said that it's not designed for total image generation; it's meant to artificially simulate a professional studio environment. 

Beyond its guardrails and filtration efforts, Rouif said that Photoroom is looking into adopting watermarking to ensure content provenance, though he said that the burden must in part be shared by social media platforms, which can enable the spread of misleading images. Watermarking metadata, which can be stripped from an image by something as simple and well-intentioned as a screenshot, is not enough on its own, according to Rouif.

Photoroom has not yet enabled watermarking on its output. 

"We think regulation and mitigating is important here," he said, adding that harmful or misleading deepfakes should not be allowed to go viral on social media.  

Related: Microsoft engineer says company asked him to delete an alarming discovery

The dataset

Photoroom's training set comes from several different sources, including public images, images provided with consent by users and images purchased from photographers themselves. 

"We buy millions of images directly from photographers," Rouif said. 

Rouif declined to tell TheStreet how much money Photoroom has spent purchasing and licensing images, though he said that training a model costs tens of millions of dollars "between the GPUs and the images," with the images accounting for a "fraction of that."

"It's some money, for sure," he said, though he did not provide specifics either in cost or quantity. 

Rouif did not explain other details of Photoroom's model, such as its size or the specifics of its training set. 

He said that copyright-infringing output is less of a problem for Photoroom than it is for the text-to-image generators, reiterating that Photoroom's goal is to specifically help small businesses with photography. Rouif said that people don't use Photoroom to replace artists or painters. 

"The main subject comes from the users. That limits a lot what the problem could be," Rouif said. "We want to do the good thing. We buy some photos, we're working on that, we're always trying to improve on that side. We want to sit on the right side. We're working with the photographer ... directly."

Copyright law has yet to be settled when it comes to AI-fueled infringement, either in the input or output of generative AI tools. It is a question at the core of many of the lawsuits that have been filed against generative AI companies. 

OpenAI, for instance, has never disputed its use of copyrighted material. The company has instead claimed that it is allowed to use such material under the "fair use" provision of copyright law, a question the courts have yet to resolve. 

Related: Deepfake program shows scary and destructive side of AI technology

Corporate responsibility in AI

Among the many debates that have erupted around generative AI is one concerning the allocation of responsibility. 

Polling from the Artificial Intelligence Policy Institute (AIPI) has found that 84% of U.S. voters believe the companies behind models used to generate harmful images ought to be held responsible. More than two-thirds of respondents also supported legislation that would solidify that responsibility. 

"It's a little bit like you created a new chemical and accidentally dumped it in Lake Erie. And now the fish are dead," Daniel Colson, the executive director of AIPI, told TheStreet last month. "That's why the thing we've been suggesting is duties of care for model developers and liability if they aren't responsible with the technology that they're deploying."

Rouif, however, said that the "computer maker isn't responsible for the way (the computer) is used. At the end of the day, it's a tool. It's our role to mitigate and do our best."

"I think regulation is great. Today, the AI is very powerful and as every new powerful tool, there is going to be some bad usage and some amazing usage, like we are helping the economy. It's the person using it that needs to be responsible."

Contact Ian with tips and AI stories via email, ian.krietzberg@thearenagroup.net, or Signal 732-804-1223.

Related: The ethics of artificial intelligence: A path toward responsible AI
