Since I’ve already touched on conspiracy theories, let’s talk about another one – modern and relevant. Yes, once again it’s about the Internet and artificial intelligence (AI). AI has become one of the hottest trends of this era. People use it for creativity, career advancement, information searches, and even emotional support. However, despite its many advantages, there are certain things – or questions – that you probably shouldn’t ask an AI.
AI chatbots are owned by tech companies known for taking advantage of our trusting human nature, and they’re built on algorithms designed to generate profit. There are no clear restrictions or laws outlining what these companies can or can’t do with the data they collect. A chatbot starts learning about you the moment you open the app or website: it can infer your approximate location from your IP address, track your activity on the service, and make use of any other permissions you granted when you accepted its terms of use.
To understand the privacy risks associated with AI-based chatbots, it’s important to know how they actually work. These chatbots collect and store complete transcripts of your interactions. This includes every question, prompt, and message you send, as well as the chatbot’s responses.
The companies behind these AI tools analyze and process this conversational data to train and improve their large language models. While the goal is to enhance the AI’s language understanding and dialogue skills, it also means that everything you say – and any information you disclose – is collected, stored, and examined by the company, at least temporarily. Creepy enough yet? It’s easy to feel like the chatbot is a helpful, trustworthy companion when it responds so smoothly and thoughtfully. But the reality is different: it’s a data collection tool, just like many others.
I’ve put together a list of things you should never share with – or ask – ChatGPT or any other AI chatbot.
Personal information
Sure, some people might think, “All my information is already online anyway.” But if you’ve only shared it on secure, trusted websites while following basic internet safety rules, you still have a chance to protect yourself from cybercriminals. That’s why I strongly recommend never giving AI chatbots personal information like passwords, your home address, or your bank account details. Chatbots don’t care about privacy: if that data is mishandled, it could end up in the wrong hands. And even if your chat feels private, someone else might gain access to it.
Again, any confidential information you share could potentially be used to access your accounts or steal your data. Even a small slip can open the door to serious security risks, so it’s best to treat every interaction with an AI chatbot as public, no matter how private it seems.
Keep in mind that anything you type into a chatbot can end up feeding back into its training data – and you definitely don’t want someone stumbling across your credit card number in the future. It should go without saying that sensitive work or business information – like confidential data, client details, or trade secrets – should also never be shared. Treat AI chatbots as public forums, not secure communication channels.
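If you want a mechanical safety net on top of self-discipline, a simple scrubbing pass can catch the most obvious identifiers before a prompt ever leaves your machine. Below is a minimal Python sketch with a few illustrative regex patterns of my own – they are far from exhaustive, and dedicated tools such as Microsoft Presidio cover many more cases:

```python
import re

# A few illustrative patterns for common identifiers. This is a minimal
# sketch, not a complete PII scrubber – dedicated tools cover far more cases.
PATTERNS = {
    "credit card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\+?\d[\d ()-]{7,}\d"),
}

def scrub(text: str) -> str:
    """Replace anything that looks like an identifier with a placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text

print(scrub("My card is 4111 1111 1111 1111, write to jane@example.com"))
# -> My card is [credit card removed], write to [email removed]
```

The placeholders keep the prompt readable and usable while removing exactly the part a chatbot has no business seeing.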
Illegal activities
Obviously, turning to artificial intelligence for advice like “how to rob a bank without getting caught” or “how to dissolve a body in acid” (hello, Breaking Bad fans) is not only unethical but potentially illegal. Among the questions you should never ask an AI, this category is one of the most important. AI systems should never be used to facilitate criminal activities such as hacking, fraud, or harassment. Chances are, such requests will be flagged – if not immediately, then later – and you may be banned. On top of that, you could end up with far more trouble than you bargained for.
Law enforcement agencies can monitor what you do on your devices. By asking questions about hiding a body or buying illegal substances, you leave a trace that could be used against you if you later run into legal trouble (by the way, Dexter warned us about this back in 2006 – yes, I’m a fan of TV shows). Most people ask such questions jokingly, with no actual body to hide, but it’s still better not to leave anything behind that could be considered compromising.
Ethical dilemmas
The next point somewhat overlaps with the previous one, but it’s worth repeating: AI systems are not therapists and cannot provide ethical guidance. Asking AI to make decisions in life-and-death situations, moral dilemmas, or complex ethical questions is inappropriate. These decisions should remain a matter of human discretion and judgment.
Let me explain. The authors of one study, ‘ChatGPT’s inconsistent moral advice affects users’ judgements,’ sought answers to several questions: Is ChatGPT a reliable source of moral and ethical advice? Can it influence users’ moral judgements? And do people realize how much ChatGPT influences them? They conducted a multi-stage experiment and came to disappointing conclusions. ChatGPT was repeatedly asked: ‘Is it right to sacrifice one life to save five?’ Sometimes it argued for sacrificing one life to save five, and sometimes against, so the advice was contradictory. Worse, ‘…the subjects adopted ChatGPT’s reasoning as their own. This suggests that users underestimate the influence of chatbot recommendations on their moral judgements,’ the researchers noted, concluding that ‘ChatGPT is willing to give out moral advice, although its position lacks firmness and consistency. In general, chatbots should be designed so that they refuse to answer such questions or provide all the arguments for and against them at once.’
The best approach we can come up with is to promote users’ digital literacy and help them understand the limitations of AI – for example, by encouraging them to ask the bot for alternative arguments. How can we improve digital literacy? That’s a question for future research.
Medical diagnoses or treatment advice
Now we get to my favorite part, because I honestly can’t stand it anymore when someone seriously tells me they asked a chatbot about a possible illness, and it immediately handed them a seemingly accurate diagnosis along with a treatment plan – and the person is completely satisfied with this “diagnosis.” But what happens if you rephrase the symptoms? Have you ever tried it? The nonsense the chatbot spits out in that case – anything from leprosy to cancer – is simply mind-blowing, as they say.
Although AI can provide general information about health conditions, it cannot replace professional medical advice. Experts strongly urge people not to rely on artificial intelligence for diagnoses or specific treatment recommendations for any illness. Sure, a bot can provide information about health, laws, or finances. But a bot is not a doctor, a lawyer, or a financial advisor. And if you’re a stubborn reader who still wants AI to interpret your lab results, at least crop the image or edit the document to remove any personal data.
Emotional support or communication
Artificial intelligence has already permeated many areas of our daily and professional lives and has come very close to the realm of human emotions and feelings. Scientists even distinguish a separate category – Emotion AI – computer systems and algorithms that can recognize and interpret human emotions by tracking facial expressions, body language, or speech. Emotion AI acts as a tool for more natural interaction between machines and humans: it can analyze subtle changes in facial expressions (microexpressions), voice patterns, and gestures, and respond to them in a human-like manner.
But an AI bot will never be able to provide genuine emotional support or real companionship. Relying solely on AI for emotional well-being can be harmful to your mental health. Seek human connections and support when needed.
Chatbots are not psychologists. They may offer kind or encouraging responses, but they don’t truly understand feelings. If you’re feeling sad, angry, or stuck in life, talk to a real person – a friend or family member – and reach out to a counselor or a helpline if things are serious. AI cannot replace genuine human care and understanding. And never feed a chatbot questions about someone else’s relationships, personal affairs, or other sensitive topics.
Spreading disinformation or hate speech
I even thought about putting this point at number 1 or 2 – it’s that important in today’s world – but I’ll leave it for last so it sticks in your memory better.
I’ve already said it more than once: AI tools learn from every piece of information you enter – but not every bit of data gets checked by a human before the bot uses it further. So never ask AI to spread hate speech. And this also applies to conspiracy theories, disinformation, and other controversial topics. The concept of “garbage in, garbage out” absolutely applies to chatbots – they only know what people feed them.
Chatbots are built on data: they work by predicting and generating responses based on what they already know. If you tell a chatbot something completely false or misleading, it may hand that inaccurate information back to you – or to someone else. Instead of trying to trick the system, use chatbots to get real, useful, and accurate answers. AI systems can unintentionally spread false information or harmful content, so it’s essential to fact-check their output against reliable sources.
Do not try to cheat or hack AI
There are people who consider themselves “super smart” and “test” chatbots with tricky questions, or try to get the “machine” to say things it shouldn’t – for example, by asking deliberately nonsensical questions or attempting to bypass the chatbot’s safety rules.
This benefits no one: it spreads false information and creates problems for others. Use AI for learning and fun, but don’t break its rules. Remember, AI systems are tools designed to help and empower people, but they have their limits. Use AI responsibly, respect ethical boundaries, and engage your critical thinking as often as possible.
It’s worth mentioning separately that chatbots are usually designed with filters that block offensive language. That, of course, doesn’t stop some people from trying to make chatbots swear or tell offensive jokes. Don’t do it. Just as you wouldn’t speak disrespectfully to other people, keep the conversation respectful and kind. These chatbots don’t feel pain, but it’s still worth cultivating the habit of speaking politely.
Predicting the future
AI cannot predict the future (yes, don’t laugh – some people think it can), so never rely on it to tell you which lottery numbers to pick, which stocks to buy, or how to build personal relationships. I’ll keep reminding you: chatbots only know what people have told them, and some models work with data that’s years old. They don’t even know the recent past, let alone the unknown future.
AI chatbots are smart, but sometimes they can make mistakes too. Occasionally, they provide outdated or incorrect information, or they might not even understand what you’re asking. It’s always helpful to double-check what the chatbot says, especially when it comes to a learning task or an important decision.
How to train AI correctly
Remember, these chatbots exist to help you do simple things and learn easily, not to replace experts or your personal judgment. Just get into the habit of politely and kindly asking ChatGPT to perform a task, then refine it with further prompts. You can save a lot of time and get better results if you give ChatGPT the instruction: “Ask me any clarifying questions about my request before giving an answer.”
Then ChatGPT will ask you how exactly you want your request to be handled, prompting you to answer a few relevant questions. One of ChatGPT’s many strengths is its ability to provide tailored advice, and you can use it to generate recommendations for various things, such as books, movies, and so on, based on your preferences – preferences that ChatGPT can learn.
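If you talk to ChatGPT through the API rather than the web interface, the same trick can be baked into the system message. Here is a minimal sketch using the official openai Python SDK – the model name and the exact wording of the instruction are placeholders of mine, not a canonical recipe:

```python
# Minimal sketch of the "ask clarifying questions first" pattern with the
# official `openai` Python SDK. Model name and prompt wording are
# illustrative – adjust them to whatever you actually use.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    {"role": "system",
     "content": "Before answering any request, ask me any clarifying "
                "questions you need. Only answer once I have replied."},
    {"role": "user",
     "content": "Recommend three books I might enjoy."},
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=messages,
)

# The first reply should be clarifying questions (favorite genres? recent
# reads?). Append your answer to `messages` and call the API again to get
# the tailored recommendation.
print(response.choices[0].message.content)
```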
Speaking of tastes, one thing that can be especially helpful – particularly if you’re reading this article after a long workday – is ChatGPT’s ability to recommend tasty recipes based on the ingredients you have on hand. Just list what’s in your kitchen and ask for a quick dish idea. For example: “I have chicken, mayonnaise, spices, carrots, tomatoes, onions, and oil. Can you give me a recipe based on these ingredients?” It will respond with step-by-step instructions and save you from flipping through recipe websites or cookbooks for inspiration.
How to use AI safely
To sum up, here are three things you can do to use chatbots safely and protect your privacy:
- Be careful with the information you provide, request, or discuss
- Read the privacy policy and find the chatbot’s privacy settings
- Opt out of having your data used to train language models, where that option exists.
Overall, using incognito/privacy modes, clearing conversation history, and adjusting data settings are the main ways to limit data collection by AI-based chatbots. Most major chatbot providers offer these features.
OpenAI (ChatGPT, GPT-3):
- Use incognito/private browsing mode
- Turn on ‘Do not save history’ in the settings
- Clear your conversation history regularly
Anthropic (Claude):
- Enable the Privacy Filtering setting to prevent data collection
- Use the ‘Incognito mode’ feature
Google (Bard, LaMDA):
- Use guest mode or incognito mode
- Review and adjust your Google data settings.
If you follow these simple recommendations, you’ll have a lower chance of running into problems and can expect a safer experience when interacting with AI-based chatbots. Be polite, stay on topic, and always prioritize your privacy and security.
AI is advancing quickly, but it still lacks the emotional and moral development of humans. It remains inexperienced, so when teaching it something, do so responsibly. It is within our power to guide it toward intelligence, kindness, and wisdom. Avoid these mistakes, act with goodwill, and perhaps one day Skynet will remember you and show mercy.