A Brief History of ASR: Automatic Speech Recognition

Our friends at Descript begin a series on the evolution of ASR with the piece below. We at Speakeasy AI are excited about how our revolutionary approach to conversational AI via speech-to-intent™ will mold the ASR landscape and help enable the future of what can be done with voice. – Frank Schneider, CEO, Speakeasy AI

by Jason Kincaid, Descript

This moment has been a long time coming. The technology behind speech recognition has been in development for over half a century, going through several periods of intense promise — and disappointment. So what changed to make ASR viable in commercial applications? And what exactly could these systems accomplish, long before any of us had heard of Siri?

The story of speech recognition is as much about the application of different approaches as the development of raw technology, though the two are inextricably linked. Over a period of decades, researchers would conceive of myriad ways to dissect language: by sounds, by structure — and with statistics.

Early Days

Human interest in recognizing and synthesizing speech dates back hundreds of years (at least!) — but it wasn’t until the mid-20th century that our forebears built something recognizable as ASR.

[Image: 1961 — IBM Shoebox]

Among the earliest projects was a “digit recognizer” called Audrey, created by researchers at Bell Laboratories in 1952. Audrey could recognize spoken numerical digits by looking for audio fingerprints called formants — the distilled essences of sounds. In the 1960s, IBM developed Shoebox — a system that could recognize digits and arithmetic commands like “plus” and “total”. Better yet, Shoebox could pass the math problem to an adding machine, which would calculate and print the answer.

Meanwhile, researchers in Japan built hardware that could recognize the constituent parts of speech, like vowels; other systems could evaluate the structure of speech to figure out where a word might end. And a team at University College in England could recognize 4 vowels and 9 consonants by analyzing phonemes, the discrete sounds of a language. But while the field was taking incremental steps forward, it wasn’t necessarily clear where the path was heading. And then: disaster.

[Image: October 1969, The Journal of the Acoustical Society of America]

A Piercing Freeze

The turning point came in the form of a letter written by John R. Pierce in 1969. Pierce had long since established himself as an engineer of international renown; among other achievements he coined the word transistor (now ubiquitous in engineering) and helped launch Echo I, the first passive communications satellite. By 1969 he was an executive at Bell Labs, which had invested extensively in the development of speech recognition.

In an open letter³ published in The Journal of the Acoustical Society of America, Pierce laid out his concerns. Citing a “lush” funding environment in the aftermath of World War II and Sputnik, and the lack of accountability that came with it, Pierce admonished the field for its lack of scientific rigor, asserting that there was too much wild experimentation going on:

“We all believe that a science of speech is possible, despite the scarcity in the field of people who behave like scientists and of results that look like science.” — J.R. Pierce, 1969

Pierce put his employer’s money where his mouth was: he defunded Bell’s ASR programs, which wouldn’t be reinstated until after he resigned in 1971.

Progress Continues

Thankfully there was more optimism elsewhere. In the early 1970s, the U.S. Department of Defense’s ARPA (the agency now known as DARPA) funded a five-year program called Speech Understanding Research. This led to the creation of several new ASR systems, the most successful of which was Carnegie Mellon University’s Harpy, which could recognize just over 1,000 words by 1976.

Meanwhile, efforts from IBM and AT&T’s Bell Laboratories pushed the technology toward possible commercial applications. IBM prioritized speech transcription in the context of office correspondence, and Bell was concerned with “command and control” scenarios: the precursors to the voice dialing and automated phone trees we know today. Despite this progress, by the end of the 1970s ASR was still a long way from being viable for anything but highly specific use cases.

The ‘80s: Markovs and More

A key turning point came with the popularization of Hidden Markov Models (HMMs) in the mid-1980s. This approach represented a significant shift “from simple pattern recognition methods, based on templates and a spectral distance measure, to a statistical method for speech processing” — which translated to a leap forward in accuracy. A large part of the improvement in speech recognition systems since the late 1960s is due to the power of this statistical approach, coupled with the advances in computer technology necessary to implement HMMs.

HMMs took the industry by storm — but they were no overnight success. Jim Baker first applied them to speech recognition in the early 1970s at CMU, and the models themselves had been described by Leonard E. Baum in the ‘60s. It wasn’t until 1980, when Jack Ferguson gave a set of illuminating lectures at the Institute for Defense Analyses, that the technique began to disseminate more widely.

The success of HMMs validated the work of Frederick Jelinek at IBM’s Watson Research Center, who since the early 1970s had advocated for the use of statistical models to interpret speech, rather than trying to get computers to mimic the way humans digest language: through meaning, syntax, and grammar (a common approach at the time). As Jelinek later put it: “Airplanes don’t flap their wings.”

These data-driven approaches also facilitated progress that had as much to do with industry collaboration and accountability as with individual eureka moments. With the increasing popularity of statistical models, the ASR field began coalescing around a suite of tests that would provide a standardized benchmark to compare against. This was further encouraged by the release of shared data sets: large corpuses of data that researchers could use to train and test their models on. In other words: finally, there was an (imperfect) way to measure and compare success.

[Image: November 1990, Infoworld]
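To make the statistical shift concrete, here is a minimal sketch of the core computation inside an HMM recognizer: hidden states stand in for sub-word sounds, observations stand in for quantized acoustic frames, and the Viterbi algorithm recovers the most likely state sequence. All of the states, symbols, and probabilities below are toy values invented for illustration, not taken from any of the systems described above.

```python
import numpy as np

# Toy HMM for a tiny recognizer: hidden states stand in for sub-word sounds,
# observations are quantized acoustic frames (3 symbols). All numbers are
# invented for illustration only.
states = ["sil", "w", "ah", "n"]                   # silence + sounds of "one"
start = np.array([0.7, 0.1, 0.1, 0.1])             # P(first state)
trans = np.array([                                 # P(next state | current state)
    [0.6, 0.4, 0.0, 0.0],
    [0.0, 0.5, 0.5, 0.0],
    [0.0, 0.0, 0.5, 0.5],
    [0.1, 0.0, 0.0, 0.9],
])
emit = np.array([                                  # P(observed symbol | state)
    [0.8, 0.1, 0.1],
    [0.2, 0.6, 0.2],
    [0.1, 0.2, 0.7],
    [0.3, 0.5, 0.2],
])
eps = 1e-12                                        # avoids log(0) for impossible moves
start_p, trans_p, emit_p = (np.log(m + eps) for m in (start, trans, emit))

def viterbi(observations):
    """Return the most likely hidden-state path for a sequence of frame symbols."""
    T, n = len(observations), len(states)
    score = np.full((T, n), -np.inf)               # best log-prob ending in state s at time t
    back = np.zeros((T, n), dtype=int)             # backpointers for path recovery

    score[0] = start_p + emit_p[:, observations[0]]
    for t in range(1, T):
        for s in range(n):
            cand = score[t - 1] + trans_p[:, s]    # extend every previous path into state s
            back[t, s] = int(np.argmax(cand))
            score[t, s] = cand[back[t, s]] + emit_p[s, observations[t]]

    path = [int(np.argmax(score[-1]))]             # best final state, then trace back
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return [states[s] for s in reversed(path)]

print(viterbi([0, 1, 1, 2, 2, 0]))                 # most likely sound sequence for the toy frames
```

Training such a model (estimating the transition and emission probabilities from real recordings) is exactly the kind of work that the shared data sets mentioned above made tractable.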

Consumer Availability — The ‘90s

For better and worse, the ’90s introduced consumers to automatic speech recognition in a form we’d recognize today. Dragon Dictate launched in 1990 for a staggering $9,000, touting a dictionary of 80,000 words and features like natural language processing (see the Infoworld article above).

These tools were time-consuming (the article claims otherwise, but Dragon became known for prompting users to ‘train’ the dictation software to their own voice), and they required users to speak in a stilted manner: Dragon could initially recognize only 30–40 words a minute, while people typically talk around four times faster than that. But it worked well enough for Dragon to grow into a business with hundreds of employees, and customers spanning healthcare, law, and more. By 1997 the company introduced Dragon NaturallySpeaking, which could capture words at a more fluid pace — and, at $150, a much lower price tag.

Even so, there may have been as many grumbles as squeals of delight: to the degree that there is consumer skepticism around ASR today, some of the credit should go to the over-enthusiastic marketing of these early products. But without the efforts of industry pioneers James and Janet Baker (who founded Dragon Systems in 1982), the productization of ASR may have taken much longer.

[Image: November 1993, IEEE Communications Magazine]

Whither Speech Recognition — The Sequel

25 years after J.R. Pierce’s letter was published, the IEEE published a follow-up titled Whither Speech Recognition: The Next 25 Years⁵, authored by two senior employees of Bell Laboratories (the same institution where Pierce worked). The latter article surveys the state of the industry circa 1993 and serves as a sort of rebuttal to the pessimism of the original. Among its takeaways:
  • The key issue with Pierce’s letter was his assumption that in order for speech recognition to become useful, computers would need to comprehend what words mean. Given the technology of the time, this was completely infeasible.
  • In a sense, Pierce was right: by 1993 computers had meager understanding of language—and in 2018, they’re still notoriously bad at discerning meaning.
  • Pierce’s mistake lay in his failure to anticipate the myriad ways speech recognition can be useful, even when the computer doesn’t know what the words actually mean.
The Whither sequel ends with a prognosis, forecasting where ASR would head in the years after 1993. The section is couched in cheeky hedges (“We confidently predict that at least one of these eight predictions will turn out to have been incorrect”) — but it’s intriguing all the same. Among their eight predictions:
  • “By the year 2000, more people will get remote information via voice dialogues than by typing commands on computer keyboards to access remote databases.”
  • “People will learn to modify their speech habits to use speech recognition devices, just as they have changed their speaking behavior to leave messages on answering machines. Even though they will learn how to use this technology, people will always complain about speech recognizers.”

The Dark Horse

In a forthcoming installment in this series, we’ll be exploring more recent developments and the current state of automatic speech recognition. Spoiler alert: neural networks have played a starring role. But neural networks are actually as old as most of the approaches described here — they were introduced in the 1950s! It wasn’t until the computational power of the modern era (along with much larger data sets) that they changed the landscape. But we’re getting ahead of ourselves. Stay tuned for our next post on Automatic Speech Recognition by following Descript on Medium, Twitter, or Facebook.

This article was originally published by Descript.

Why Speech-to-Intent?

Speech to Intent vs Speech to Text

We are often asked, “What is the difference between speech to text and speech to intent?” Our patented speech to intent system uses AI to analyze the entire audio file – a complete voice utterance – to get to the right intent. This approach brings intelligence to the way we speak, which is far different from the way we type. We created our speech to intent solution because of our experience in the AI/chatbot space and the opportunity we uncovered there.

What’s the Opportunity?

Over the last five years, chatbots have gone from a “let’s try it” tool to a must-have part of a business CX roadmap. Digital AI solutions have been shown to answer over 30% of the questions that would otherwise have ended up at the call center, yet many customers never try the chatbot; they just call. In fact, over 70% of customer conversations still happen over voice. There is a vast opportunity to use AI to answer these questions within voice channels.

What’s the Problem?

Speech to text systems have improved dramatically over the last three years; Microsoft’s research has reported human-parity results in transcription. It would seem simple to connect IVR to AI solutions like chatbots via speech to text and reap the benefits already realized in digital channels. In reality, this approach does not deliver as expected: transcription can create a type of interaction, but transcription is not actually intelligent. Speakeasy AI’s solution is Speech to Intent™.
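As a point of reference, here is a minimal, hypothetical sketch of the conventional "transcribe first, then classify" flow described above, using a toy keyword classifier. The intent names and transcripts are invented for illustration, and a real deployment would plug in an actual ASR engine and NLU model; the point is simply that any transcription error flows straight into the intent decision.

```python
# Toy "speech to text, then text to intent" baseline. The transcripts below are
# hard-coded stand-ins for the output of a real ASR engine; intent names are invented.
INTENT_KEYWORDS = {
    "check_balance": {"balance", "account", "much"},
    "reset_password": {"password", "reset", "locked"},
    "talk_to_agent": {"agent", "representative", "human"},
}

def classify_transcript(transcript: str) -> str:
    """Pick the intent whose keyword set overlaps most with the transcript words."""
    words = set(transcript.lower().split())
    scores = {intent: len(words & keywords) for intent, keywords in INTENT_KEYWORDS.items()}
    return max(scores, key=scores.get)

# A clean transcript lands on the right intent...
print(classify_transcript("i forgot my password"))      # reset_password
# ...but a single ASR mis-hearing leaves no keyword hits, and the choice becomes arbitrary.
print(classify_transcript("i forgot my pass word"))     # check_balance (all scores are 0)
```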
Speakeasy AI Speech to Intent Benefits
  • Method of operation: The recognition process is divided into a pipeline of micro-services mapped directly onto the corpus used to train the system. Our speech to intent system bypasses the issues of traditional speech to text, where accents or a poor audio signal can greatly affect the outcome. The Speech to Intent system only matches against known content in the AI system, giving a much better match percentage (a rough sketch of this matching idea follows the list below).
  • Implementation: The existing corpus of alternates in the AI system is used to set up the Speech to Intent system. Because the content already exists, the system can be set up quickly, and all of the special words and products are immediately recognized by the system.
  • Accuracy: We have seen over 80% accuracy in testing with a well-developed corpus.
  • Maintenance: Within the Speakeasy AI admin console it is easy to see questions that have been misunderstood and instantly assign alternates to improve accuracy. Additional content can be added and put into production within minutes. Matching can be configured to understand what is being said in the context of the business deployment. Maintenance is instant and completely controlled by the company.
  • Resources: The Speakeasy AI Speech to Intent system requires fewer resources than speech to text systems.
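As a rough illustration of the "match against known content" idea in the list above, and emphatically not Speakeasy AI's patented pipeline, the sketch below compares an utterance against the existing corpus of alternates by cosine similarity and returns an intent only when the best match clears a threshold. The embedding function, intent names, alternates, and threshold are all stand-ins invented for this example; a production system would embed the audio itself with a trained model.

```python
import numpy as np

def embed(utterance: str, dim: int = 256) -> np.ndarray:
    """Toy embedding: hash character trigrams into a fixed-size unit vector.
    A real system would embed the audio signal with a trained model instead."""
    vec = np.zeros(dim)
    text = utterance.lower()
    for i in range(len(text) - 2):
        vec[hash(text[i:i + 3]) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# "Known content": the alternates already authored for each intent in the AI system.
CORPUS = {
    "check_balance": ["what is my balance", "how much money do i have"],
    "reset_password": ["i forgot my password", "reset my password please"],
}

def match_intent(utterance: str, threshold: float = 0.5):
    """Return (intent, score) for the closest known alternate, or (None, score)
    when nothing in the corpus is close enough to trust."""
    query = embed(utterance)
    best_intent, best_score = None, 0.0
    for intent, alternates in CORPUS.items():
        for alternate in alternates:
            score = float(query @ embed(alternate))   # cosine similarity of unit vectors
            if score > best_score:
                best_intent, best_score = intent, score
    return (best_intent, best_score) if best_score >= threshold else (None, best_score)

# "Maintenance" in this sketch is just appending a new alternate to the corpus.
CORPUS["reset_password"].append("i am locked out of my account")
print(match_intent("i think i forgot my password"))   # scored against every known alternate
```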
Our mission is to make it easier for businesses to understand and respond to their customers’ needs in voice with AI. We accomplish this mission by using the world’s first and only Speech-to-Intent™ solution. Combined with our end-to-end reporting, our solution provides real-time insights into customers’ intents, needs, and outcomes. And since an AI platform is only as good as its improvement cycle, we enable rapid updates to ensure wins are delivered on the day you launch. With our voice AI solutions and our team’s proven expertise, we work tirelessly to provide better voice experiences and deliver understanding as a service.