Re: Law360 - Epiphany, RK
posted on Jan 25, 2015 12:28AM
To further illustrate this with a more realistic, simple example, look at my Dragon Dictate. I can finish a sentence using its VR, and it may make a mistake on a word. Say the word is 'digital'; it may have typed 'digit' or 'directional'. I say 'Select digit' or 'Select directional'; it automatically jumps to the first occurrence of the word fitting that phoneme and pops up a list of words it thinks may be right for me to choose from. 80% of the time 'digital' is in there to choose from. I simply say 'Choose digital' and it puts it in correctly, and that sets in motion analytics that try for a better match the next time I say 'digital'... in essence, that is what the CA servers will be doing with algorithms and sensors.
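To make that correct-and-adapt cycle concrete, here's a minimal sketch in Python. Everything in it (the CandidateCorrector class, the candidates and choose methods, count-based ranking) is my own hypothetical illustration of the idea, not Dragon's actual code or API.

# Hypothetical sketch of the correct-and-adapt loop; not Dragon's real API.
from collections import defaultdict

class CandidateCorrector:
    def __init__(self):
        # misrecognized word -> {candidate replacement: times chosen}
        self.choices = defaultdict(lambda: defaultdict(int))

    def candidates(self, heard):
        # The pop-up list that 'Select ...' brings up: replacements the
        # user has picked before, most-chosen first.
        ranked = self.choices[heard]
        return sorted(ranked, key=ranked.get, reverse=True)

    def choose(self, heard, correct):
        # 'Choose digital': record the correction so the next ranking
        # puts 'digital' nearer the top -- the adaptation step.
        self.choices[heard][correct] += 1

corrector = CandidateCorrector()
corrector.choose("digit", "digital")   # user corrects a miss
corrector.choose("digit", "digital")   # correction repeats over time
corrector.choose("digit", "digits")
print(corrector.candidates("digit"))   # ['digital', 'digits']

The point of the sketch is just that each spoken correction feeds back into the ranking, so the right word climbs the candidate list over time.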
These are the data segments associated with that particular codec phoneme's oscilloscopic bit representation, set as a key to my voice and/or the English-language phoneme.
We knew edig had a special project with LSHP before it became what Nuance is today; actually before the internet and flash MP3... remember RP's comments about reading the WSJ by requesting it on the PDA? Well, I'll take it one step further and say you can/will also be able to request specific categories of news. CA will know by your habits.
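A hedged sketch of what "knowing by your habits" could amount to on the server side: nothing fancier than counting which categories you request and ranking them. The HabitTracker name and methods below are my own illustration, not anything from edig or CA.

# Hypothetical sketch of habit-based category ranking; not e.Digital/CA code.
from collections import Counter

class HabitTracker:
    def __init__(self):
        self.requests = Counter()

    def log_request(self, category):
        # Every "read me the sports news" style request gets counted.
        self.requests[category] += 1

    def likely_categories(self, n=3):
        # The categories the user asks for most become the default offer.
        return [cat for cat, _ in self.requests.most_common(n)]

tracker = HabitTracker()
for cat in ["business", "tech", "business", "sports", "business", "tech"]:
    tracker.log_request(cat)
print(tracker.likely_categories(2))   # ['business', 'tech']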
Are you listening, Mr. Handal? emit is speaking revelation knowledge here.
It was further exemplified by e.Digital's ability to put up to 100k VR calls on the MXP100 with Lucent, way before IBM could get anywhere near that recognition ability with ViaVoice.
CQuence sorta says it all, like embedded.Digital.
emit...