I wish I had a tool that I could run concurrently with Dragon to measure performance and recognition accuracy over a session. I think recognition accuracy could be measured fairly reliably, but I am less sure how you would measure performance. I would love to see how many times the "?????" was displayed, with a link to the audio that led to each distress signal. I would also like to see some sort of report on my corrections. A summary of my corrections would help me detect patterns and address them. I feel like I make the same errors & corrections over and over again, but I can never sit down at the end of a session and remember the details of my experience.
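To be clear, nothing like this exists in Dragon as far as I know, but here is roughly what I have in mind. If each correction were logged somewhere as a "what it heard" / "what I meant" pair, even a tiny script could tell me which mistakes I keep fixing. The file name and CSV format below are made up purely for the sake of the sketch; Dragon does not produce anything like this on its own.

# Hypothetical sketch: assumes a CSV log where each row is
# "what Dragon heard","what I corrected it to" -- this log and its
# format are my invention, not something Dragon actually writes.
import csv
from collections import Counter

def summarize_corrections(path="corrections_log.csv", top_n=10):
    """Count (heard, corrected) pairs and report the most frequent ones."""
    pairs = Counter()
    with open(path, newline="", encoding="utf-8") as f:
        for heard, corrected in csv.reader(f):
            pairs[(heard.strip(), corrected.strip())] += 1

    # Print the repeat offenders so patterns jump out at the end of a session.
    print(f"Top {top_n} repeated corrections this session:")
    for (heard, corrected), count in pairs.most_common(top_n):
        print(f"{count:3d}x  heard '{heard}'  ->  corrected to '{corrected}'")

if __name__ == "__main__":
    summarize_corrections()

That kind of end-of-session summary is all I am really asking for: a ranked list of the corrections I make most often, so I could work on them deliberately instead of trying to remember them from scratch.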
While I am in my dream world, I would love to be able to send my audio file to someone who could tell me what speech patterns I have that are problematic for speech recognition software. Sometimes I feel like I do mumble or slur. I've always been told that I am a fast talker, and sometimes I feel like that may get in my way with Dragon. I suspect that someone with professional experience in voice/speech (not necessarily a speech pathologist, but maybe even a public speaking coach) could listen to me and point out unproductive speech patterns that I don't even begin to hear.
This post is inspired by several weeks of poor-quality user files. I have replaced my "pristine" backup user, and recognition accuracy still sucks. Grump.