
I actually hold somewhat of an opposite opinion on this. As HN readers on the cutting edge of tech, we know that things like this are possible (I first learned of it from an Adobe demo a while ago), but it is not mainstream knowledge yet.

The sooner everybody knows that things like voice impersonation are possible, the sooner we can avoid real damage, such as courts reaching wrong verdicts after accepting an impersonated voice recording as evidence.

Yes, we lose an entire category of court evidence (possibly all voice recordings), but the tech was going to get here sooner or later, and it was going to be a problem we would have to deal with. I would rather live in a world where everyone knows voice recordings are unreliable than see real harm done by impersonated voices because people didn't think it was possible.



Avoiding court misjudgments is reasonably achievable.

How we're going to fight against people believing whatever sound bites from fake news they want to believe is a harder question...



