Machine Learning Can Also Scale Misleading Terms, Unwanted Data Sharing, and Automated Bias


By jmau111, a cybersecurity-aware developer. pɹɐʍɹoɟ sǝʌoɯ sǝɯᴉʇ ʻʎɹoɯǝɯ ǝʞᴉๅuꓵ

Machine Learning is not evil; tools are just means. Still, writing about privacy and security issues is tricky. Readers unaware of the problems may react unexpectedly, and you risk coming across as the person who answers every conversation topic with "that's what they want you to believe." Thanks to whistleblowers, activists, hackers, and many others, though, the truth can now be proved securely, and that's a game-changer.

The problem is neither the algorithms nor the hardware; it's always how the technology is used. Machine Learning could even help protect privacy, so nothing about this is inevitable.

However, to get their Golden Goose models, big companies use questionable methods. Machine Learning also generates lots of biases and false positives, increasing discrimination and automating censorship.

The incomprehensible terms and permissions

It's hard to understand the privacy policies and permissions that applications request, even for tech-savvy users. Big platforms have a tremendous capacity for wrapping what they collect in legalese.

IMHO, it's forced consent, and it's problematic.

The user cannot determine whether a request is legitimate or not. Unclear and inconsistent settings are common, so you can end up disabling basic privacy protections without even knowing it.

We don't want that. Be extra careful when you install an application: the list of permissions can be quite long, including access to all your contacts, messages, and activities, even when the service doesn't need them.
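If you'd rather audit this yourself than trust the install prompt, here's a minimal sketch that lists the permissions an installed Android app requests. It assumes adb is installed with a device connected, and com.example.app is a hypothetical package name:

```python
# Minimal sketch: list the permissions an installed Android app requests,
# by parsing the output of adb's "dumpsys package" command.
# Assumes adb is on the PATH and a device/emulator is connected.
import subprocess

def requested_permissions(package: str) -> list[str]:
    """Return permission names reported by dumpsys for the given package."""
    output = subprocess.run(
        ["adb", "shell", "dumpsys", "package", package],
        capture_output=True, text=True, check=True,
    ).stdout
    # dumpsys lists permissions as indented "android.permission.X" lines,
    # sometimes suffixed with ": granted=true"; keep only the name.
    return [
        line.strip().split(":")[0]
        for line in output.splitlines()
        if "android.permission." in line
    ]

if __name__ == "__main__":
    for perm in requested_permissions("com.example.app"):  # hypothetical package
        print(perm)
```

If a simple utility app turns out to request your contacts, microphone, or precise location, that's a red flag worth acting on.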

Many people accept those crazy conditions because they think they're mandatory to use the application, when they often aren't. Of course, with a service like Facebook, it's impossible not to opt in.

The not-so-innocent raw data

If you don't tweak the right settings, your devices constantly send confidential data, such as your location, to external servers for analysis.

However, raw data must be processed and correlated to add value. Some believe the GAFAMs have already built the ultimate AI, a.k.a. Skynet, but in reality you still need people to train algorithms and supervise models.

This supposed supreme intelligence is built on the hard work of hundreds, perhaps thousands, of people working in poor conditions. It's called digital labor.

The Big Five and other technology leaders send their users' data to various third-party companies, which often process records one by one to fix the AI's mistakes and categorize the data.

Automating biases

In 2015, Amazon realized its AI-based hiring tool was actually discriminating against women. The models had been trained to rate applicants according to patterns observed in resumes submitted to the company over a ten-year period.

The problem was that most of those CVs came from men, so "Amazon's system taught itself that male candidates were preferable" (source: Reuters).

It's dangerous, as companies tend to trust Machine Learning's results blindly. A few years later, the political sphere is making the same mistake, which could aggravate discrimination in poor areas and complicate police work with a flood of false positives.
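To see how this happens, here is a toy illustration, not Amazon's actual system: a scikit-learn model trained on synthetic, skewed hiring history learns to score two identical candidates differently based on gender alone.

```python
# Toy illustration: a model trained on biased historical hiring data
# learns to use gender as a predictive signal. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic "resumes": one skill score, one gender flag (1 = male).
skill = rng.normal(0, 1, n)
male = rng.integers(0, 2, n)

# Skewed history: past recruiters favored male candidates at equal skill.
hired = (skill + 1.0 * male + rng.normal(0, 1, n)) > 1.0

model = LogisticRegression().fit(np.column_stack([skill, male]), hired)

# Two candidates identical in every respect except the gender flag.
woman, man = [[1.5, 0]], [[1.5, 1]]
print("P(hire | woman):", model.predict_proba(woman)[0, 1])
print("P(hire | man):  ", model.predict_proba(man)[0, 1])
# The model faithfully reproduces the historical bias: the man scores higher.
```

The model isn't malicious; it simply optimizes against a history that was already discriminatory, which is exactly why "the data says so" is not a defense.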

Conclusion

We want Machine Learning to make us genuinely smarter and to open the human mind. Its current usage is clearly unsatisfactory, sometimes acting as a massive confirmation-bias engine.

