
AI Applications: Should You Worry About Your Data and Privacy?

As AI apps become more popular for fun and work, it’s important to know how your data and privacy could be at risk.

You might think that using AI apps to enhance your social media photos, transform your look into a vintage 1990s high school yearbook portrait, or preview what your future baby might look like is harmless fun. And you might think that the data you upload to tools like ChatGPT simply helps your productivity. But what if we told you that, in doing so, you are exposing yourself to serious risks of data and privacy violations? Find out how AI apps can threaten your security and what you can do to protect yourself.

The Defense Secretary of the Philippines took a significant step on October 14th by banning the use of AI image generator applications within the Armed Forces of the Philippines (AFP) and the Department of National Defense (DND). The National Bureau of Investigation (NBI) is likewise warning the public about the careless use of yearbook AI apps, citing the potential risks that come with them. Samsung, meanwhile, banned the use of ChatGPT in the workplace after reports emerged that employees had used the tool to troubleshoot issues in company source code. Finally, institutions such as Goldman Sachs, Citigroup, and the Italian government have also enacted bans on ChatGPT over mounting privacy concerns.

The question arises: Is the ban on AI tools justified, or is it an overreaction? Should we be genuinely concerned about the growing use of these AI applications?

Cambridge Analytica Scandal

The Cambridge Analytica Scandal stands out as a cautionary tale, underscoring the potential risks associated with AI and data privacy. In this infamous case, the British consulting firm Cambridge Analytica unlawfully collected personal data from millions of Facebook users, often without their consent, primarily for political advertising purposes.

Cambridge Analytica harvested data through an app known as “This Is Your Digital Life,” developed in 2013 by data scientist Aleksandr Kogan and his company Global Science Research. The app was designed to build psychological profiles of its users, and it also gathered personal information from those users’ friends through Facebook’s Open Graph platform. It ultimately amassed data from up to 87 million Facebook profiles, which was allegedly used to provide analytical support to Donald Trump’s 2016 presidential campaign; the firm was also accused of interfering in the 2016 Brexit referendum.

Social media and AI have turned us into highly exploited products, for free.

The point to highlight here is that, back then, the app seemed like a harmless personality quiz. People used it for entertainment without realizing the broader implications of sharing their data. That situation mirrors the current trend of using AI applications for fun.

The Main Concern

To comprehend the potential dangers associated with these AI applications, we must first grasp how AI operates. We have delved into this topic extensively in a previous article, but in essence, AI requires substantial volumes of data to develop its engines and models.

As an AI engineer myself, I often encounter challenges with clients around data collection, a fundamental requirement for building effective AI models. If you were to design a facial recognition app, you would need a substantial dataset of photos for the system to be reliable. Likewise, creating fake videos or photos of an individual requires a sample of roughly eight or more of their images, which is exactly the kind of sample collected by the yearbook apps the NBI is warning us about. Using AI apps for entertainment therefore hands developers data that can be repurposed for other uses, and that data is freely shared by users who were just having fun with the app.
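
To make this concrete, here is a minimal sketch using the open-source face_recognition Python library, showing how little data is needed: roughly eight uploaded photos are enough to build a face signature that can then match the same person in any new image. The file names are hypothetical placeholders.

```python
# A minimal sketch, assuming the open-source face_recognition library is
# installed (pip install face_recognition) and eight photos of one person
# are on disk. File names are placeholders.
import face_recognition

# Encode the "uploaded" photos into 128-dimensional face embeddings.
uploaded_photos = [f"user_upload_{i}.jpg" for i in range(8)]
known_encodings = []
for path in uploaded_photos:
    image = face_recognition.load_image_file(path)
    faces = face_recognition.face_encodings(image)
    if faces:  # keep the first face found in each photo
        known_encodings.append(faces[0])

# Any new photo of the same person can now be matched against that signature.
candidate = face_recognition.load_image_file("photo_scraped_from_social_media.jpg")
candidate_encoding = face_recognition.face_encodings(candidate)[0]
matches = face_recognition.compare_faces(known_encodings, candidate_encoding)
print(f"Matched {sum(matches)} of {len(matches)} reference photos")
```

The same small batch of photos that powers a fun yearbook filter is, in other words, already a usable training set.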

In the case of ChatGPT, the consumer-services FAQ clearly states that content submitted to the system may be used to improve the performance of its models. This implies that any confidential business information or sensitive code uploaded to the system could potentially become accessible to others.
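
If you do need to paste text into a consumer AI tool, one modest precaution is to scrub obvious secrets before the text ever leaves your machine. The sketch below is deliberately simplistic, a few illustrative regex patterns rather than real data-loss-prevention tooling, but it shows the idea:

```python
import re

# Hypothetical, deliberately simple patterns; real DLP tools go much further.
REDACTION_PATTERNS = {
    "email":   re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),
    "ip":      re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"),
}

def scrub(text: str) -> str:
    """Replace likely secrets with labeled placeholders before sending."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text

prompt = ("Our server 10.0.0.12 rejects logins; "
          "key sk-abc123XYZ456def789GHI000, contact ops@example.com")
print(scrub(prompt))
# Our server [REDACTED-IP] rejects logins;
# key [REDACTED-API_KEY], contact [REDACTED-EMAIL]
```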

How Microsoft addresses these concerns

Some companies, like Microsoft, offer secure and compliant solutions that address these concerns. Microsoft, an early investor in OpenAI (the company behind ChatGPT), provides a service called Azure OpenAI. This service offers a secure, private environment for using models such as ChatGPT and DALL-E, where the data you upload, whether confidential information or personal photos, remains inaccessible to the public and to other Microsoft customers. In addition, the data you upload is never used to improve the models’ performance, nor to improve any Microsoft or third-party products or services. This closed system is accessible only to you and your authorized employees.
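
For a sense of what this looks like in practice, here is a rough sketch of calling a private Azure OpenAI deployment with the official openai Python SDK. The endpoint, deployment name, and API version below are placeholders; check Microsoft’s current Azure OpenAI documentation for the exact values your resource requires.

```python
import os
from openai import AzureOpenAI  # official SDK, openai>=1.0

# Placeholders: substitute your own Azure resource endpoint, key, and
# deployment name. The environment variable names here are illustrative.
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # assumed version string; verify against the docs
)

# Requests go to your own Azure resource rather than the public ChatGPT
# service, and per Microsoft's terms the data is not used for model training.
response = client.chat.completions.create(
    model="my-gpt-deployment",  # the deployment name you created in Azure
    messages=[{"role": "user", "content": "Summarize this internal memo: ..."}],
)
print(response.choices[0].message.content)
```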

Be Careful

In summary, exercise caution when handling personal and business data in AI applications. Read the terms and conditions carefully, understand where your data will be stored, and assess the legitimacy of the company or individual behind the application. A fly-by-night operation can shut down overnight and repurpose the data it collected. Skepticism is warranted, given how difficult it is to establish trust and verify how user data is handled. In my opinion, it is prudent to favor established companies with a strong industry track record over newer entrants.
