
ChatGPT is an artificial intelligence chatbot that has gained enormous popularity recently. People all over the world argue that it will reshape the whole job landscape, eliminating some jobs and creating new ones, because of how powerful it is.
It doesn’t matter what your profession is: ChatGPT can be helpful in one way or another to make your job easier, or maybe even replace you, ouch!
Some people have even started using it as a teacher or a companion to chat with instead of searching the internet for the information they need. It can provide code, configurations, best practices, and more. While you and your team should of course make use of such technology, you should be careful while using it! Treat it as a hostile environment: assume any information you pass to it can be read by anyone on the internet.
Yes, some people just don’t get that and have started passing very sensitive and proprietary information to it. What happens if there’s a breach and someone can access all of that data and those inquiries? What happens if the information you provided is actually served up as a response to others? While we haven’t heard of breaches of OpenAI’s systems themselves, unfortunately, the latter has already been reported!
Some reports have claimed that employees at Samsung and Amazon exposed proprietary and sensitive information while using ChatGPT, and that their data was later served to others as responses! Can you imagine how harmful and dangerous it would be if someone casually asked ChatGPT a question and your company’s internal information came back as a public answer?
While it’s really hard to forecast the full dangers and risks of ChatGPT and AI in general, in the meantime we can at least follow some good old-fashioned operational security to keep ourselves and our data safe.
- First, when you create an account, it must not be attributable to you or your organization in any way whatsoever. Care must be taken: use random names and virtual phone numbers, and preferably treat it as you would an OSINT investigation! Get rid of any trails that could be traced back to you.
- Second, do not submit any proprietary or sensitive information. Be as generic as you can when asking for help with anything that touches internal or sensitive material.
- Third, if you’re using it to adjust code, always remember to remove any API keys or credentials first. Some developers have pasted entire production codebases with static credentials inside; it should be obvious why that is a terrible idea. A small scrubbing sketch follows this list.
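
Before pasting a snippet into ChatGPT, it can help to run it through a quick scrubber first. Below is a minimal Python sketch; the `SECRET_PATTERNS` list and the `scrub` helper are illustrative assumptions, not a complete secret detector, so still review the output by hand before sharing it.

```python
import re
import sys

# Patterns that often indicate hard-coded secrets; illustrative, not exhaustive.
SECRET_PATTERNS = [
    # key = "value" style assignments for api_key / secret / token / password
    re.compile(r"""(?i)(api[_-]?key|secret|token|passw(or)?d)\s*[:=]\s*["'][^"']+["']"""),
    # AWS access key IDs follow this prefix/length convention
    re.compile(r"AKIA[0-9A-Z]{16}"),
    # PEM private key headers
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
]


def scrub(text: str) -> str:
    """Replace anything that looks like a credential with a placeholder."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("<REDACTED>", text)
    return text


if __name__ == "__main__":
    # Usage: python scrub.py snippet.py  ->  prints a sanitized copy to stdout
    with open(sys.argv[1], encoding="utf-8") as handle:
        print(scrub(handle.read()))
```

Even better, keep credentials out of source entirely (environment variables or a secrets manager); then there is nothing to leak in the first place.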
Finally, we’re in a transitional era. AI will inevitably change our lives, and we just can’t predict now how it will eventually turn out. Until that becomes clearer, let’s at least be very careful about what we can control and protect.