LinkedIn, the hub for professional networking and career development, is now facing allegations that it uses its members’ personal data to train AI models, raising privacy concerns on the platform. In its defence, the company has insisted that the AI integration is meant to enhance business productivity and professional growth by analysing user data to match people with opportunities.
The move strikes many as sneaky, and the internet has been calling on the social networking site to clarify its stance.
LinkedIn recently issued an official statement explaining that users can opt out of AI model training in the site settings by turning off “Data for Generative AI Improvement”. “We believe that our members should have the ability to exercise control over their data, which is why we are making available an opt-out setting for training AI models used for content generation in the countries where we do this,” the statement reads.
While the update comes into effect on November 20th, users in Switzerland, the EU and the EEA will be automatically opted out thanks to their data protection laws; all other users will be opted in by default.
LinkedIn’s AI Integration: A Productivity Boost or Privacy Risk?
LinkedIn users can now use the “Write with AI” feature to create and post content on the platform. While convenient at first glance, the feature raises concerns over user privacy. LinkedIn CEO Ryan Roslansky recently appeared on the Big Technology Podcast, where he spoke about how AI will transform jobs by 2030 and said LinkedIn’s AI approach centres on personalised job recommendations for career advancement.
“No exaggeration, we have probably run at least 250 different AI feature experiments through our products over the last two years. And a ton of them have failed, but some of them are really catching on and helping people be more productive in what they’re trying to do on LinkedIn,” he said.
However, this is not the first time a platform has been accused of misusing personal data under the pretext of research without directly informing its users. In LinkedIn’s case, the exact purpose behind the use of the personal data remains unclear, and the so-called “control” seems largely symbolic since the setting is enabled by default.
Opting out also does not mean that existing user data will be removed: turning the setting off does not undo training that has already been carried out on that data.
The larger debate remains over the use of personal data for AI development: will such practices deliver the long-term benefits claimed for users while safeguarding their rights?