Be Careful With the Data You Give DeepSeek… and Every Other AI
When DeepSeek hit the App Store a few weeks ago, it rocked the technology world and financial markets, promising high-performance AI models to rival those from OpenAI and Google.
But some in government and data security circles fear that the suddenly popular open-source AI assistant's ties to China could put U.S. data at risk, comparing it to TikTok, the social media platform that the vast majority of Congress voted to ban last year.
These issues are not limited to DeepSeek. They apply to anyone who downloads an AI chatbot app to their phone, even as national security alarms sound in legislative halls. We outline some useful tips below.
A pair of U.S. House members on Thursday announced plans to introduce legislation that would prohibit the use of the app on all government devices, citing the Chinese Communist Party's ability to access data collected by DeepSeek and other Chinese-owned applications, and the potential for DeepSeek to be used to spread disinformation.
“This is a five-alarm national security fire,” New Jersey Democrat Josh Gottheimer said in a statement.
“We’ve seen China’s playbook with TikTok before, and we can’t let it happen again,” Gottheimer said.
Australia banned the app last week on government devices. Some U.S. states have done the same, with Texas being the first; New York’s governor on Monday issued a statewide ban on DeepSeek across state government devices and systems.
DeepSeek’s connection to China, its popularity in the United States, and the buzz around it invite easy comparisons to TikTok, but security experts say that while DeepSeek’s data security threats are real, they are different from those posed by the social media platform.
Although DeepSeek may be the hot new AI assistant right now, new AI models and versions arrive constantly, so it is important to be careful when using any kind of AI software.
Meanwhile, Dimitri Sirota, CEO of BigID, a cybersecurity company specializing in AI security compliance, said it will be a tough sell to convince ordinary people not to download and use DeepSeek.
“I think it’s tempting, especially for things in the news,” he said. “I think, in a way, people just need to make sure they operate within a set of parameters.”
Why are people worried about DeepSeek?
Like TikTok, DeepSeek has ties to China, and user data is sent back to cloud servers there. And just as with Chinese-owned TikTok, Chinese law requires DeepSeek to hand over user data to the government if the government demands it.
With TikTok, lawmakers on both sides of the aisle worried that the Chinese Communist Party could use U.S. user data for intelligence purposes or could modify the app itself to flood U.S. users with Chinese propaganda. Those concerns ultimately prompted Congress to pass a law last year banning TikTok unless it is sold to a buyer U.S. officials deem appropriate.
However, dealing with DeepSeek, or any other AI, is not as simple as banning an app. Unlike TikTok, which companies, governments, and individuals can choose to avoid, DeepSeek is something people may end up encountering, and handing information to, without even knowing it.
Sirota said average consumers may not even know which AI model they are interacting with. Many companies run multiple AI models, and the “brain,” the specific model powering a given AI assistant, can be swapped for another in the company’s collection depending on the task at hand, without the consumer ever noticing.
Meanwhile, the buzz around AI is not dying down anytime soon. More models from other companies, including open-source models like DeepSeek, are on the way and are sure to draw attention from companies and consumers alike.
Therefore, focusing on DeepSeek alone addresses only some data security risks, said Kelcey Morgan, senior manager of product management at Rapid7.
Instead of focusing on whichever model is currently high-profile, companies and consumers need to decide how much risk they are willing to take with AI in general and adopt practices designed to protect their data.
“This applies to whatever hot new thing comes along next week,” Morgan said.
Can the Chinese Communist Party use DeepSeek data for intelligence purposes?
Cybersecurity experts say China has enough personnel and processing power to mine the vast amount of data collected by DeepSeek, combine it with information from other sources, and potentially build profiles of U.S. users.
“I do think we’ve entered a new era, where computing is no longer a limitation,” Sirota said, adding that China has those capabilities too.
Although, like TikTok’s users, those experimenting with DeepSeek may be young and relatively unimportant right now, China is happy to play the long game, waiting for them to grow into influential people worth targeting.
Andrew Borene, executive director at Flashpoint, the world’s largest private provider of threat data and intelligence, said people in Washington, regardless of their political leanings, have become increasingly aware of these risks in recent years.
“We know policymakers know; we know the technology community already knows,” he said. “My personal assessment is that I’m not sure American consumers know what these risks are, or where this data is going and why that is a question.”
Borene stressed that anyone working in government should exercise “the highest level of caution” if they choose to use DeepSeek, but he also said all users should remember that their data may ultimately end up in the hands of Chinese officials.
“It’s an important factor to consider,” he said. “You shouldn’t need to read the privacy policy to know that.”
How to stay safe when using DeepSeek or other AI models
Given that it can be difficult to know which AI model you are actually using, experts say it is best to be careful with any of them.
Here are some tips.
Like everything else, be smart with AI. The usual technical best practices apply here: use long, complex, and unique passwords; always enable two-factor authentication when possible; and keep all devices and software updated.
Keep personal information personal. Think before entering details about yourself into an AI chatbot. Yes, that covers obvious no-nos like Social Security numbers and bank information, but it also includes details that may not automatically ring alarm bells, such as your address, workplace, and the names of friends or colleagues.
Be skeptical. Just as you are wary of requests for information that arrive by email, text, or social media post, be wary of an AI’s questions. Sirota said to treat it like a first date: if the model asks oddly personal questions the first time you use it, walk away.
Don’t rush to be an early adopter. Morgan said that just because an AI model or app is trending doesn’t mean you need it right away. Decide for yourself how much risk you are willing to take on new software.
Read the terms and conditions. Yes, that’s a lot to ask. But Borene said those statements can also reveal whether an AI model or app is collecting and sharing data from other parts of your device. If so, turn off those permissions.
Be mindful of U.S. adversaries. Any app based in China warrants suspicion, but so do apps from other adversarial or authoritarian countries such as Russia, Iran, or North Korea. Whatever the terms and conditions say, the privacy protections you may enjoy in places like the United States or the European Union will not necessarily apply.