The explosion in audio and video content and interfaces over the last few years has been plain to see, but the ways of dealing with all that media behind the scenes haven’t quite caught up. AssemblyAI, powered by $28M in new funding, is aiming to become the go-to solution for analyzing speech, offering ultra-simple API access for transcribing, summarizing, and otherwise figuring out what’s going on in thousands of audio streams at a time.
Multimedia has become the standard for so many things in an incredibly short time: phone calls and meetings became video calls, social media posts became 10-second clips, chatbots learned to speak and understand speech. Countless new applications are appearing, and like any new and growing industry, people need to be able to work with the data those applications produce in order to run them well or build something new on top of them.
The problem is audio isn’t naturally easy to work with. How do you “search” an audio stream? You could look at the waveform or scrub through it, but more likely you’ll want to transcribe it first and then search the resulting text. That’s where AssemblyAI steps in: though there are numerous transcription services, it’s not often easy to integrate them into your own app or enterprise process.
“If you want to do content moderation, or search, or summarize audio data, you have to turn that data into a format that’s more pliable, and that you can build features and business processes on top of,” said AssemblyAI CEO and co-founder Dylan Fox. “So we were like, let’s build a super-accurate speech analysis API that anyone can call, even at a hackathon – like a Twilio or Stripe style integration. People need a lot of help to build these features, but they don’t want to glue a bunch of providers together.”
AssemblyAI offers a handful of different APIs that you can call extremely simply (a line or two of code) to perform tasks like “check this podcast for prohibited content,” or “identify the speakers in this conversation,” or “summarize this meeting into less than 100 words.”
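To give a sense of what that looks like in practice, here is a minimal sketch of such a call in Python. This is illustrative rather than official documentation: the endpoint, the parameter names (speaker_labels, content_safety, auto_chapters) and the placeholder API key and audio URL are assumptions modeled on the shape of AssemblyAI’s public v2 API, and may differ from the current product.

```python
import time

import requests

API_KEY = "YOUR_API_KEY"  # placeholder; a real key comes from the AssemblyAI dashboard
BASE = "https://api.assemblyai.com/v2"
HEADERS = {"authorization": API_KEY}

# Submit an audio file for transcription plus higher-level analysis.
# Parameter names are assumptions based on the public v2 API docs.
job = requests.post(
    f"{BASE}/transcript",
    headers=HEADERS,
    json={
        "audio_url": "https://example.com/meeting.mp3",  # hypothetical URL
        "speaker_labels": True,   # identify the speakers in the conversation
        "content_safety": True,   # flag prohibited content
        "auto_chapters": True,    # produce time-stamped summaries
    },
).json()

# Transcription runs asynchronously, so poll until the job finishes.
while True:
    result = requests.get(f"{BASE}/transcript/{job['id']}", headers=HEADERS).json()
    if result["status"] in ("completed", "error"):
        break
    time.sleep(3)

print(result.get("text"))  # the full transcript, ready to search or index
```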
You may very well be skeptical, as I was, that a single small company can produce working tools that accomplish so many tasks so simply, considering how complex those tasks turn out to be once you get into them. Fox acknowledged that this is a challenge, but said that the tech has come a long way in a short span.
“There’s been a rapid increase in accuracy in these models, over the last few years especially,” he said. “Summary, sentiment identification… they’re all really good now. And we’re actually pushing the state of the art – our models are better than what’s out there, because we’re one of the few startups really doing large scale deep learning research. We’re going to spend over a million dollars on GPU and compute for R&D and training, in the next few months alone.”
It can be harder to grasp intuitively because it’s not so easily demonstrable, but language models have come along just as things like image generation (This ___ does not exist) and computer vision (Face ID, security cameras) have. Of course GPT-3 is a familiar example of this, but Fox pointed out that understanding and generating the written word is practically an entirely different research domain than analyzing conversation and casual speech. Thus, although the same advances in machine learning techniques (like transformers and new, more efficient training frameworks) have contributed to both, they’re like apples and oranges in most ways.
The result, at any rate, has been that it’s possible to perform effective moderation or summarizing processes on an audio clip a few seconds or an hour long, simply by calling the API. That’s immensely useful when you’re building or integrating a feature like, for example, short-form video – if you expect a hundred thousand clips to be uploaded every hour, what’s your process for a first pass at making sure they aren’t porn, or scams, or duplicates? And how long will the launch be delayed while you build that process?
Instead, Fox hopes, companies in this position will look for an easy and effective way forward, the way they might if they were faced with adding a payment process. Sure, you could build one from scratch – or you could add Stripe in about 15 minutes. Not only is this fundamentally desirable, it also clearly separates AssemblyAI from the more complex, multi-service packages that define the audio analysis products of big providers like Microsoft and Amazon.
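As a concrete, hypothetical illustration of that first-pass moderation step, a filter built on top of an analysis API like this could be a handful of lines. The content_safety_labels response shape and the label names below are assumptions modeled on AssemblyAI’s documented content-safety output, not a verified excerpt:

```python
# Hypothetical first-pass filter for uploaded clips, reusing the polling
# pattern from the earlier sketch. Field and label names are assumptions
# modeled on AssemblyAI's documented content-safety output.
BLOCKED = {"pornography", "hate_speech", "crime_violence"}  # example labels


def first_pass_ok(result: dict, threshold: float = 0.8) -> bool:
    """Reject a clip if any blocked label was detected with high confidence."""
    for hit in result.get("content_safety_labels", {}).get("results", []):
        for label in hit["labels"]:
            if label["label"] in BLOCKED and label["confidence"] >= threshold:
                return False
    return True
```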
The company already has hundreds of paying customers, having tripled revenue in the last year, and now processes a million audio streams a day. “We’re 100% live. There’s a huge market and a huge need, and the spend from customers is there,” Fox said.
The $28M A round was led by Accel, with participation from Y Combinator, John and Patrick Collison (Stripe), Nat Friedman (GitHub), and Daniel Gross (Pioneer). The plan is to spread all those zeroes across recruitment, R&D infrastructure, and building out the product pipeline. As Fox noted, the company is spending a million on GPUs and servers in the next few months – a bunch of Nvidia A100s that will power the incredibly computation-intensive research and training processes. Otherwise you’re stuck paying for cloud services, so it’s better to rip that Band-Aid off early.
As for recruiting, I suggested that they might have a hard time staffing up in direct competition with the likes of Google and Facebook, which are of course working hard on their own audio analysis pipelines. Fox was optimistic, however, feeling that the culture at those companies could be slow and stifling.
“I think there’s definitely a desire among really good AI researchers and engineers to work on the bleeding edge – and the bleeding edge in production,” he said. “You come up with something innovative, and a few weeks later you have it in production. A startup is the only place you can do stuff like that.”