In your videos, do you blur or pixelate faces? If so, do you do it to avoid being recognized on the street, or for something even more serious like protecting a model's identity?
Either way, I think it's been obvious for years that some sort of AI will eventually be able to un-censor the content and show what was underneath the obfuscation. There are already plenty of examples of this out there, particularly in Japanese content, where genitals are usually censored. One day it'll perhaps be able to recreate the footage at an even higher resolution than the original un-redacted footage!
And if they can do facial recognition that means they can do hand recognition. Feet recognition. Voice recognition. Mannerism recognition. IMO we're only a couple of years out from AI being able to figure out who's in the footage no matter what we do, if we aren't already there now.
If AI will eventually be able to decode all attempts at obfuscation, then we have to treat all footage, past and present, as effectively unredacted, and maybe rethink whether we want it up at all.
And if they can do facial recognition that means they can do hand recognition. Feet recognition. Voice recognition.
Voice has been done to stunning effect. There was a site out there where you could literally make Jordan Peterson say anything you wanted.
Still exists, but mostly non-functional now.
https://notjordanpeterson.com/ In combination with visual AI (the progression of deepfake tech), within a year (probably already possible with government tech) you'll be able to make your own YouTube videos of whoever you want, complete with their voice and emotional inflections.
And I'm sure it will never be used for anything dishonest, ever.
Messmaster said: In your videos, do you blur or pixelate faces? .....
Thoughts?
I completely agree. I think with a little application you could feed an AI a number of images, each showing a tiny portion of someone's face, and it would stitch them all together.
I'm not sure of the process behind taking a blur away. You've got me thinking; I'll ask some of my contacts and see what I can find.
I think it's a case of the tech already being here, just not yet applied. It would take huge reference databases, wouldn't it?
shinyrainwear said: it would take huge reference databases, wouldn't it?
Yep, of the kind we've all been busily feeding via Facebook and Google over the last fifteen years or so. They already exist and will already have been mined to the max by security agencies friendly and otherwise.
Privacy is a 20th century concept. Most people just haven't woken up to that yet.
I actually had this run through my mind not so long ago: the possibility that one day AI could figure us out and work out who we are.
Not the most pleasant thought, is it? So now would be a good time for those of us who blur or hide our faces to review what we have up on here and trim any hints and clues that Skynet could potentially zero in on.
....Either way, I think it's been obvious for years that some sort of AI will eventually be able to un-censor the content and show what was underneath the obfuscation.....
I don't think it's inevitable. Sure, computers are capable of crunching data at light speed, but they can't extrapolate data out of nothing. There has to be some kind of saved data or information for them to come up with a result.
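To make that concrete, here's a quick Python sketch (toy 8x8 grayscale grids, not real photos) showing that pixelation is a many-to-one operation: two different images can censor down to the exact same result, so nothing can tell from the censored version alone which original it came from.

```python
def pixelate(img, block=8):
    """Pixelate by replacing each block x block tile with its average (lossy, many-to-one)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y0 in range(0, h, block):
        for x0 in range(0, w, block):
            tile = [out[y][x] for y in range(y0, min(y0 + block, h))
                              for x in range(x0, min(x0 + block, w))]
            avg = sum(tile) // len(tile)
            for y in range(y0, min(y0 + block, h)):
                for x in range(x0, min(x0 + block, w)):
                    out[y][x] = avg
    return out

# Two different "images" whose only detail sits in different corners:
a = [[0] * 8 for _ in range(8)]
a[0][0] = 160
b = [[0] * 8 for _ in range(8)]
b[7][7] = 160

print(a == b)                      # False: the originals differ
print(pixelate(a) == pixelate(b))  # True: both censor to the same flat tile
```

So any AI that "removes" a blur is really hallucinating one of the many possible originals that would have produced it, not recovering the actual one.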
Messmaster said: .....
...And if they can do facial recognition that means they can do hand recognition. Feet recognition. Voice recognition. Mannerism recognition. IMO we're only a couple of years out from AI being able to figure out who's in the footage no matter what we do, if we aren't already there now. ...
We humans have the ability to recognize others because we have the ability to remember things. But for us to remember something, we first need to have had a previous experience to remember. AI tech works in a similar way. It can recognize a face only because that face was previously uploaded into its memory; it can then make comparisons against all the other data stored in its memory. Without known data, AI can't recognize anyone, just as you won't recognize someone you've never met.
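That compare-against-what-it-has-seen idea can be sketched in a few lines. The numbers below are made-up toy "face embeddings" (real systems use hundreds of dimensions produced by a neural network), but the logic is the same: a face only gets a name if something close to it was enrolled first.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def identify(probe, gallery, threshold=0.9):
    """Return the best-matching enrolled name, or None if nothing is close enough."""
    best_name = max(gallery, key=lambda name: cosine(probe, gallery[name]))
    return best_name if cosine(probe, gallery[best_name]) >= threshold else None

# Toy gallery of previously "uploaded" faces (hypothetical values):
gallery = {"alice": [1.0, 0.1, 0.0], "bob": [0.0, 1.0, 0.2]}

print(identify([0.98, 0.12, 0.01], gallery))  # "alice": close to an enrolled face
print(identify([0.5, 0.5, 0.5], gallery))     # None: this face was never enrolled
```

The second probe scores below the threshold against everyone enrolled, so the system simply can't name it, exactly like us with a stranger.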
The real advantage of AI facial recognition is that it can process billions of bits of information in the wink of an eye, and it can never forget..... BUT! It can make mistakes. There are many examples of people being misidentified by facial recognition, because computers, like people, are imperfect and prone to error.
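And the mistakes get worse than they sound once the database gets big. With made-up but illustrative numbers: even if a random non-matching face only clears the match threshold 0.1% of the time, searching one probe against millions of enrolled faces still produces thousands of spurious "matches":

```python
gallery_size = 10_000_000   # enrolled identities (illustrative, not a real figure)
false_match_rate = 0.001    # chance a random NON-match still clears the threshold

# Each search compares the probe against every enrolled face, so the tiny
# per-comparison error rate multiplies across the whole gallery:
expected_false_hits = gallery_size * false_match_rate
print(round(expected_false_hits))  # 10000 false "matches" for a single search
```

That's the base-rate problem: the one genuine match has to be picked out from thousands of look-alike false hits, which is where the misidentifications come from.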
Messmaster said: .....
...If AI will be able to decode all attempts at obfuscating footage, then we must consider all footage past and present as not being redacted at all, and maybe rethink whether we want it up or not.
I honestly do not believe that AI technology could "decode" anything unless the info is present to decode. What I mean is this: when you take a photo with a digital camera, the image also contains a bunch of data we don't see that is embedded with it. Now, it may be possible that AI tech could use this info to recreate an unmolested image, but that data has to be there in the first place. It cannot make something out of nothing. If you took a photo back in the day, when there was nothing but film, there is no embedded data at all, so an AI algorithm could not recreate or extrapolate a true and accurate image. UNLESS there is another image somewhere which wasn't altered, which could then be used to extrapolate a true and accurate rendering.
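For what it's worth, that invisible embedded data in digital photos is EXIF metadata, and it lives in its own segment of the file, separate from the pixels. Here's a rough stdlib-only sketch that scans a JPEG's bytes for the APP1/EXIF segment; the two byte strings at the bottom are fabricated minimal examples, not real photos:

```python
def has_exif(jpeg_bytes: bytes) -> bool:
    """Scan JPEG segment headers for an APP1 block tagged 'Exif' (the hidden metadata)."""
    i = 2  # skip the SOI marker (FF D8) at the start of every JPEG
    while i + 4 <= len(jpeg_bytes) and jpeg_bytes[i] == 0xFF:
        marker = jpeg_bytes[i + 1]
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")  # includes its own 2 bytes
        if marker == 0xE1 and jpeg_bytes[i + 4:i + 10] == b"Exif\x00\x00":
            return True
        if marker == 0xDA:  # start-of-scan: image data begins, no more metadata segments
            break
        i += 2 + length
    return False

# Minimal fabricated byte strings (not real photos):
with_exif = b"\xff\xd8" + b"\xff\xe1" + (8).to_bytes(2, "big") + b"Exif\x00\x00" + b"\xff\xd9"
without = b"\xff\xd8\xff\xd9"
print(has_exif(with_exif), has_exif(without))  # True False
```

Note the key point this illustrates: the metadata segment is bookkeeping about the shot (camera, date, settings), so stripping it, or shooting on film where it never existed, leaves an algorithm with nothing extra to work from.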