When Apple bans ChatGPT
1.
This is a reaction to an article by Mashable.
2.
According to the news outlet, the company is concerned that employees who use programs like OpenAI's chatbot could leak confidential data.
I do think this is a valid point because most AI services use data from their users to train and fine-tune their models. The danger is especially serious if the storage holding all of that chat history is leaked or breached by a hacker.
3.
Employees have also been advised not to use GitHub’s Copilot, which is owned by Microsoft and uses OpenAI’s Codex to automate the writing of software code, the report says.
I wonder if Apple is banning only Microsoft-affiliated AI tools, or anything that isn’t made by Apple. I guess one way to circumvent the ban is to Google everything. A moment later, Apple employees are on Google Bard asking, “Give me some hints about the upcoming Google Pixel 8.”
4.
Other institutions, like JPMorgan, Bank of America, and Citigroup, have also banned ChatGPT to protect confidential information.
I like that all the examples are banking and financial companies. At least they take our privacy seriously.
5.
The chatbot, created by OpenAI, has also struggled with breaches of private data recently.
When a company’s reputation takes a hit from a breach (or several), the news spreads faster than Covid-19. No other company wants to get infected and cause a panic.
6.
Interestingly enough for Apple, OpenAI announced Thursday it was launching a ChatGPT app for Apple's own iOS.
What a genius way to outmaneuver Apple: launch the app in Apple’s own store. Now there’s no excuse for Apple to remove the app without facing backlash from the community, and it will be interesting to see whether Apple employees download it on their own devices.
7.
That’s all for today. Check out the full article at https://mashable.com/article/apple-chatgpt-employee-ban-report
