Tainted Data Can Teach Algorithms The Wrong Lessons
My initial reaction to this article was "No kidding." Having done a bit of dabbling in this realm, pretty much all of the issues I have run into surrounding Artificial Intelligence can be traced to the data. When machines are subjected to the same biases as humans, is it really surprising that they would come to similar conclusions?
To draw a parallel that most on this site will understand: would you have adopted a religion at a young age (if applicable) if someone you trusted hadn't shared similar beliefs?
An unexpected consequence of looking into this was the realization that one cannot help but reflect on one's own biases if one wants to approach this in any credible fashion. Of course, that isn't currently a requirement for participation. But it becomes blatantly apparent when the algorithms start spitting out biased results (whatever the stench).
Which brings me to a couple of questions for readers who dare tackle this issue. If someone is in the field, even better (feel free to enlighten me on how full of shit I am).
1.) Should the maturation of Artificial Intelligence algorithms be regulated?
No, we're not going to be able to police the whole world. But as the common saying goes, are speeders justification for scrapping speed limits?
2.) Should private entities be allowed to create and curate proprietary black box algorithms?
We are already here.
3.) If one were to attempt to rein in the beast, how should/would one go about this?
Well, we are all evolved algorithms, complete with biases from those very evolutions. It is why we default to belief so often, and why we readily see immediate threats, the ones right in our faces, but are less likely to see long-range threats.
OUR algorithm evolved that way and we have to work around it.
So to some extent are we not the pot calling the kettle black?
Maybe education on the benefits of ethics? Come to think of it, this might even work for humans...
I work with a lot of data, and I see a lot of circumstances where people ask for data to be aggregated, joined, and massaged, and when they get the results they proclaim, "Well, this can't be right." The response is that it is EXACTLY what the raw data they provided produced. Often they have no idea what all the data they have actually contains, or whether (not likely) it was recorded in a usable manner. People have a hard time grasping the idea that technology does what it's told; there is no grey area.
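A minimal sketch of how this plays out in practice. The tables and values below are entirely made up for illustration: a duplicate entry and a missing record in the raw data quietly skew an aggregation, and the join faithfully reports the "wrong" numbers because that is exactly what it was given.

```python
import pandas as pd

# Hypothetical raw data, recorded inconsistently as so often happens.
orders = pd.DataFrame({
    "customer_id": [1, 1, 2, 3],
    "amount": [100, 50, 200, 75],
})
customers = pd.DataFrame({
    "customer_id": [1, 2, 2],          # customer 2 was entered twice
    "region": ["east", "west", "west"],
})

# The inner join silently duplicates customer 2's order (matched twice)
# and silently drops customer 3, who has no customer record at all.
merged = orders.merge(customers, on="customer_id", how="inner")
totals = merged.groupby("region")["amount"].sum()

print(totals)
# "This can't be right" -- but it is exactly what the raw data produced:
# east = 150, west = 400 (the 200 counted twice), and the 75 vanished.
```

The technology did precisely what it was told; the surprise lives in the data, not the join.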
Yes, like the use of derivatives formed from algorithms, which are used to hide, confuse, and take advantage of others!
If you cannot rein in derivatives as a mode of controlling other people's money,
then how would anyone, or any organization, be able to control someone else's algorithms, packed together in some sort of proprietary black-box AI as a form of non-transparent derivative?
Seems the genie is already out of the bottle!
I do not think anyone owns things like this. Your worries seem premature to me. Also, like most inventions and progress, it will prove to be both good and bad, so chill.
Google, Facebook, Huawei, Amazon, the NSA, CIA, FBI, and so many independent developers.
I'd mandate that the dataset used be made public if the outcome is also made public. Then others could see what different algorithms conclude from it, or even audit the dataset itself for clear biases.
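A rough sketch of what "auditing the dataset for clear biases" might look like at its simplest. The records and the `group` field are invented for illustration; the point is just that representation and label rates per group are cheap to compute once the data is public.

```python
from collections import Counter

# Hypothetical published training records with a demographic group
# attribute and a binary outcome label.
records = [
    {"group": "A", "label": 1},
    {"group": "A", "label": 1},
    {"group": "A", "label": 0},
    {"group": "B", "label": 0},
]

# How well is each group represented?
group_counts = Counter(r["group"] for r in records)

# How often does each group receive the positive label?
positive_rate = {
    g: sum(r["label"] for r in records if r["group"] == g) / group_counts[g]
    for g in group_counts
}

print(group_counts)   # group A heavily over-represented
print(positive_rate)  # and A gets the positive label far more often
```

Checks like these catch only the crudest imbalances, but with a closed dataset even this much scrutiny is impossible.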
Yes, 100%. We needed to add technically literate people to Congress a decade ago.
This is only acceptable if there are tons of entities constantly in competition with each other.
With other beasts whose only job is to prevent any one entity from becoming too powerful, thus maintaining balance.