Computers have been taught to act and speak, but the problem is how to get them to listen, that is, to understand spoken words. Early speech recognition systems have been used by workers who need to enter data into a computer while their hands and eyes are otherwise occupied.
The recognition of a limited vocabulary of single words from a single speaker is very much a solved problem. Some systems, however, have trouble distinguishing the noise of crumpling paper from the sound of a human voice.
Speaker-independent systems that recognize the same word as spoken by many different people can be built by sampling how several people say the same words and storing several patterns for each word. But the chances for error are greater and the vocabulary must usually be smaller. Even more difficult is continuous speech. People keep running words together and even changing some sounds when they speak, so the computer hardly knows when one word ends and another starts, making the process much more difficult. It was reported that it took an hour on the largest computer in current use to recognize one second of continuous speech.
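The multi-pattern approach described above can be illustrated with a short sketch. The class name, the fixed-length feature vectors, and the distance measure below are illustrative assumptions, not details from the article; real systems of that era used spectral templates and dynamic time warping.

```python
# Minimal sketch (assumed details): speaker-independent isolated-word
# recognition by storing several reference patterns per word and picking
# the word whose closest stored pattern best matches the input.
import numpy as np

class MultiTemplateRecognizer:
    def __init__(self):
        # word -> list of reference feature vectors from different speakers
        self.templates = {}

    def enroll(self, word, feature_vector):
        """Store one speaker's rendition of a word as a reference pattern."""
        self.templates.setdefault(word, []).append(np.asarray(feature_vector, float))

    def recognize(self, feature_vector):
        """Return the word whose nearest stored pattern matches the input best."""
        x = np.asarray(feature_vector, float)
        best_word, best_dist = None, float("inf")
        for word, patterns in self.templates.items():
            for p in patterns:
                d = np.linalg.norm(x - p)  # distance to this reference pattern
                if d < best_dist:
                    best_word, best_dist = word, d
        return best_word
```

Storing several patterns per word is what lets a new speaker be recognized, but it also explains the passage's caveat: with more patterns per word, confusable words are more likely to overlap, so error rates rise and the vocabulary must stay smaller.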
Some attempts have been made to improve continuous speech recognition by giving the computer some rules of grammar to restrict which words can follow which other words. A device has been developed that has a 5,000-word vocabulary but requires the speaker to pause between words.
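A grammar restriction of the kind described here can be sketched as a simple word-pair table. The vocabulary, the ALLOWED_NEXT table, and the prune_candidates helper are hypothetical examples, not the actual device's rules.

```python
# Minimal sketch (assumed vocabulary and rules): a word-pair grammar that
# restricts which words may follow which other words, pruning the candidate
# list during continuous recognition.
ALLOWED_NEXT = {
    "<start>": {"open", "close"},
    "open":    {"the"},
    "close":   {"the"},
    "the":     {"file", "window"},
    "file":    {"<end>"},
    "window":  {"<end>"},
}

def prune_candidates(previous_word, acoustic_candidates):
    """Keep only the candidates the grammar allows after the previous word."""
    allowed = ALLOWED_NEXT.get(previous_word, set())
    return [(word, score) for word, score in acoustic_candidates if word in allowed]

# Example: after "the", acoustically confusable candidates are cut down
# to the two words the grammar actually permits.
print(prune_candidates("the", [("file", 0.8), ("vile", 0.7), ("while", 0.6)]))
```

The point of the restriction is that the recognizer no longer has to consider every word at every position, which makes the continuous-speech search far more tractable.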
These devices, however crude at the moment, will represent the first applications of what is considered a promising but extremely difficult technology.