Businesses including Stitch Fix are already experimenting with DALL-E 2 – TechCrunch



It’s been a few weeks since OpenAI began allowing customers to commercially use images created with DALL-E 2, its incredibly powerful text-to-image AI system. But despite current technical limitations, not to mention the lack of an API, some pioneering users say they’re already testing the system for various commercial use cases, anticipating the day DALL-E 2 moves into full production.

Stitch Fix, an online service that uses recommendation algorithms to personalize clothing selections, says it’s experimenting with DALL-E 2 to visualize products based on specific attributes such as color, fabric and style. For example, if a Stitch Fix customer requests “high-rise, red, stretchy, skinny jeans” during the pilot, DALL-E 2 is tapped to generate images of that item, which a stylist can then use to match against a similar product in Stitch Fix’s inventory.

“DALL-E 2 helps us visually display a product’s most informative features, ultimately helping stylists find exactly what the customer asked for in their written feedback,” a spokesperson told TechCrunch in an email.

Stitch Fix DALL-E 2

A DALL-E 2 generation from Stitch Fix’s pilot. The prompt: “Soft, olive green, great color, pockets, patterned, nice texture, long, cardigan.” Image Credits: OpenAI
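To make that workflow concrete, here is a minimal sketch, in Python, of how a written request like the one above could be folded into a single DALL-E 2 prompt and turned into candidate images. It is not Stitch Fix’s actual pipeline: the visualize_request helper is hypothetical, and the call assumes the Image endpoint of the openai Python package, which only became available after this piece was written (DALL-E 2 had no public API at the time).

# A minimal sketch (not Stitch Fix's pipeline) of turning a customer's written
# request into DALL-E 2 candidate images. Assumes the openai Python package's
# Image endpoint; DALL-E 2 had no public API when this article was written.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def visualize_request(attributes, garment, n=4):
    """Fold structured attributes into one descriptive prompt and generate images."""
    prompt = f"{', '.join(attributes)} {garment}, studio product photo on a plain background"
    response = openai.Image.create(prompt=prompt, n=n, size="512x512")
    return [item["url"] for item in response["data"]]

# The request quoted above; a stylist could review the results against inventory.
for url in visualize_request(["high-rise", "red", "stretchy"], "skinny jeans"):
    print(url)

In the scenario the article describes, images like these would serve only as a visual reference for stylists matching against real inventory, not as product photos themselves.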

Of course, DALL-E 2 has quirks, some of which are giving early corporate users pause. Eric Silberstein, VP of data science at e-commerce marketing startup Klaviyo, detailed mixed impressions of the system as a marketing tool in a blog post.

He noted that human models generated by DALL-E 2 can have unnatural facial expressions and disproportionate muscles and joints, and that the system doesn’t always interpret instructions correctly. When Silberstein asked DALL-E 2 to create an image of a candle on a wooden table against a gray background, it sometimes cut off the candle’s lid, blended the candle into the table or added an asymmetrical border around it.

DALL-E 2 Eric Silberstein

Silberstein’s experiments with DALL-E 2 for product visualization. Image Credits: OpenAI

“Photos with people, and photos of human models showing off products, aren’t usable as-is,” Silberstein wrote. Still, he said he’d consider using DALL-E 2 for tasks such as providing starting points for image edits and communicating ideas to graphic artists. “For stock-style photos that don’t include people and don’t have to follow specific brand guidelines, DALL·E 2, to my expert eye, can now reasonably replace ‘the old way,’” Silberstein continued.

Editors at Cosmopolitan came to a similar conclusion when they teamed up with digital artist Karen X. Cheng to create a magazine cover using DALL-E 2, describing the system’s limitations as an art generator along the way.

But the strangeness of AI sometimes works as a feature rather than a bug. For its Draw Ketchup campaign, Heinz used DALL-E 2 to generate a series of images of ketchup bottles from natural-language prompts such as “ketchup,” “ketchup art,” “blurred ketchup,” “ketchup in space” and “ketchup renaissance.” The company invited fans to submit their own prompts, which Heinz ran through the system and shared on its social channels.

Heinz DALL-E 2

Heinz bottles as “imagined” by DALL-E 2, part of Heinz’s latest advertising campaign. Image Credits: OpenAI

“With AI-generated images dominating news and social feeds, we saw a natural opportunity to extend our Draw Ketchup campaign, which is rooted in the understanding that the Heinz brand is synonymous with ketchup, to test this concept in the AI space,” Heinz senior brand manager Jacqueline Chao said in a press release.

Evidently, DALL-E 2-based campaigns can work when AI itself is the subject. But several commercial users of DALL-E 2 say they’ve used the system to create assets in which AI isn’t the subject at all.

Software engineer Jacob Martin used DALL-E 2 to create a logo for OctoSQL, the open source project he’s building. For around $30 – roughly the price of a logo design service on Fiverr – Martin ended up with a cartoon octopus that looks as though it could have been drawn by a human.

“The end result isn’t ideal, but I’m very happy with it,” Martin wrote in a blog post. “As far as DALL-E 2 goes, I think it’s currently very much in the ‘first iteration’ phase for most intents and purposes – the main exception being the logo designs; those are great… I think the real breakthrough will come when DALL-E 2 becomes 10x-100x cheaper and faster.”

DALL-E 2 OctoSQL

The OctoSQL logo created by DALL-E 2 after several tests. Image Credits: OpenAI

One DALL-E 2 user, Don McKenzie, head of design at dev-tools startup Deephaven, took the idea a step further. He tried using the system to generate thumbnails for the company’s blog, motivated by the idea that posts with images get more engagement than those without.

“As a small team of mostly engineers, we don’t have the time or budget to commission custom artwork for each of our blog posts,” McKenzie wrote in a blog post. “Our approach so far has been to spend 10 minutes scrolling through stock sites for tangentially related but ultimately ill-fitting images, download something that isn’t terrible, slap it on the post and publish.”

After spending a weekend and $45 in credits, McKenzie says he was able to replace the images on 100 or so blog posts with DALL-E 2-generated artwork. It took some fiddling with the prompts to get the best results, but McKenzie said the effort was well worth it.

“On average, I’d say it took two minutes and about four to five prompts per blog post to find something I was happy with,” he wrote. “We were spending more money and time each month on stock images, with worse results.”

At least one startup is trying to commercialize DALL-E 2’s asset-generating capabilities for companies that don’t have time to brainstorm prompts. Unstock.ai, built on DALL-E 2, promises “high-quality images and illustrations on demand” – at no charge, for now. Customers enter a prompt (for example, “Top view of three goldfish in a bowl”) and select a preferred style for the generated images (vector art, photorealistic, pencil), which can then be cropped and resized.

Unstock.ai essentially automates prompt engineering, a concept in AI that refers to embedding a description of the desired task in the text fed to the system. The idea is to give the AI instructions detailed enough that it reliably produces the requested result. Generally speaking, a prompt like “a film still of a woman drinking coffee while walking to work and talking on the phone” yields more consistent results than “a woman walking.”
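As a toy illustration of the concept, and not Unstock.ai’s implementation, the sketch below contrasts a terse prompt with one that spells out style, composition and context; the build_prompt helper is purely hypothetical.

# A toy illustration of prompt engineering, not Unstock.ai's implementation:
# the same subject as a vague prompt versus one that pins down style,
# composition and context, which tends to yield more consistent generations.

def build_prompt(subject, style, details):
    """Embed the task description in text: style + subject + concrete details."""
    return f"{style} of {subject}, " + ", ".join(details)

vague = "a woman walking"
detailed = build_prompt(
    subject="a woman walking to work",
    style="a film still",
    details=["drinking coffee", "talking on the phone", "morning city street"],
)

print(vague)     # a woman walking
print(detailed)  # a film still of a woman walking to work, drinking coffee, ...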

The potential applications are wide-ranging. When contacted for comment, OpenAI declined to share figures on DALL-E 2’s commercial users. But the demand appears to be there: unofficial workarounds for DALL-E 2’s lack of an API have sprung up around the web, built by developers eager to work the system into apps, services, websites and video games.

