40% of AI-Generated Code Is Vulnerable. How to Protect Yours!

AI can write code within a few minutes, but it can also get hacked just as fast!
AI coding tools like Cursor and ChatGPT are reshaping how we build software. Instead of wrestling with complex syntax for hours, developers now have conversations with their editors, and code materializes. But this ease of use often comes with a cost to security. Recent research concluded that roughly 40% of AI-generated code contained vulnerabilities. The good news: 55% of the issues found in that research already had fixes available!
The question now is: how can we continue to use AI tools without compromising safety? When you use AI to help build something, you can open up security holes without even realizing it, which can lead to serious consequences down the road.
Even if one doesn’t have a background in security, it is important to know and follow some basic rules when using AI for your project.
Why should you care about the security of AI code?
AI doesn’t discriminate. Irrespective of who writes the code, every single application put on the internet becomes a potential source of AI training data. And AI doesn’t know what secure code looks like. It generates what it has seen before. Since the internet is full of insecure code, it is likely that when working with AI, you may end up with code that looks totally fine but is filled with problems. That’s why it’s important that we learn a few simple habits now before things go live.
AI is Helpful, But Not Always Right
LLMs are super intelligent, but like overconfident interns, they can hand you insecure code with total conviction.
You might say: “Hey, build me a login system with a database.” And the AI might hand you back code that:
- Stores passwords in plain text
- Builds SQL queries by smashing strings together, leading to SQL injection
- Hardcodes your database password directly in the file
These are all common mistakes, and most AI tools won’t warn you about them. You need to be vigilant and know your stuff.
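The first two mistakes above have well-known fixes. Here is a minimal sketch in Python, assuming a SQLite database and a `users` table with a `username` column (both hypothetical): salted key derivation instead of plain-text passwords, and a parameterized query instead of string concatenation.

```python
import hashlib
import os
import sqlite3

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Store a salted hash, never the password itself."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def find_user(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver treats `username` as data,
    # so an input like "'; DROP TABLE users; --" cannot become SQL.
    return conn.execute(
        "SELECT id FROM users WHERE username = ?", (username,)
    ).fetchone()
```

The `?` placeholder is what an AI often skips when it "smashes strings together"; the driver-level escaping is the fix.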
Things You Can Do Today to Stay Safer
Here are some security measures that aren’t super technical, but make a big difference:
Don’t Trust Input
If your app takes user input such as forms or URLs, don’t assume people will always use it the way you expect. Not every user is a trusted user.
Verify that your inputs have the right limits set and are sanitized. Check input type, length, and content, and reject potentially malicious payloads like script tags or SQL fragments.
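As a sketch of "check type, length, and content": an allow-list validator for a hypothetical username field. Anything outside the expected shape is rejected outright, which is safer than trying to strip out bad characters.

```python
import re

# Allow only short alphanumeric usernames (hypothetical policy).
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,20}$")

def validate_username(raw: str) -> str:
    """Reject anything outside the expected type, length, and content."""
    if not isinstance(raw, str):
        raise ValueError("username must be a string")
    if not USERNAME_RE.fullmatch(raw):
        raise ValueError("username must be 3-20 letters, digits, or underscores")
    return raw
```

An allow-list ("only these characters") is generally more robust than a deny-list ("not these payloads"), because attackers are better at inventing payloads than you are at enumerating them.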
Never Hardcode Secrets
Most apps need API keys or passwords. Make sure not to hardcode them into your code. It might feel easy, but you may accidentally publish your repository and someone could mine Bitcoin on your AWS account. Instead, use environment variables. Tools like .env files or your hosting service’s secret manager can be helpful. It’s way easier and way safer.
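A minimal sketch of the environment-variable approach, using only the standard library. `DB_PASSWORD` is a hypothetical variable name; you would set it in your shell, a .env file loaded by your framework, or your host’s secret manager.

```python
import os

def get_db_password() -> str:
    """Read the secret from the environment, never from source code."""
    password = os.environ.get("DB_PASSWORD")
    if password is None:
        # Failing loudly at startup beats silently connecting with a default.
        raise RuntimeError("DB_PASSWORD is not set; refusing to start")
    return password
```

Failing fast when the variable is missing also means a misconfigured deployment breaks immediately instead of in production.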
Use HTTPS
Whenever you are talking to something on the internet, like a server, API, or front end, always use https://, not http://. The S is for “secure,” and it means the data is encrypted in transit. Most modern cloud platforms like GCP or AWS provide HTTPS by default. If you’re building a serverless web app, tools like Vercel, Cloudflare, and Netlify provide HTTPS for free.
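One cheap way to enforce this in your own code is a guard that refuses any plain-http URL before a request goes out. A minimal sketch using the standard library (`require_https` is a hypothetical helper name):

```python
from urllib.parse import urlparse

def require_https(url: str) -> str:
    """Refuse to call any endpoint that isn't encrypted in transit."""
    if urlparse(url).scheme != "https":
        raise ValueError(f"insecure URL (expected https): {url}")
    return url
```

Calling this on every outbound URL turns an accidental `http://` in AI-generated code into an immediate error instead of silent plaintext traffic.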
Don’t Run as Admin (Even Locally)
Never run your code with superuser access unless you really need it. One mistake or one malicious library could end up deleting all your files.
Check AI Suggestions Before Copy/Pasting
It’s tempting to just copy/paste the code generated by AI and run with it. But instead take a second to verify and understand it.
- Is the code handling sensitive data?
- Is it doing anything weird with inputs?
- Is it opening files, running commands, or connecting to the internet?
If yes, inspect the code and make sure it performs exactly the functions you had planned, and nothing more. You could even ask the AI: “Is this secure?” A little prompting might get you a better version.
Use a Linter or Security Plugin
Use a linter or plugin like ESLint (for JS) or Bandit (for Python). These can alert you to insecure code patterns.
Platforms like GitHub also provide automated security scanners that will flag things like an accidentally committed API key.
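To make this concrete, here is one pattern Bandit flags in practice, `subprocess` with `shell=True`, next to the safer form. A sketch, not a complete rule list:

```python
import subprocess

def risky_list(path: str) -> str:
    # Bandit flags shell=True: a filename like "x; rm -rf ~" would be
    # parsed by the shell and run as an extra command.
    return subprocess.run(
        f"ls {path}", shell=True, capture_output=True, text=True
    ).stdout

def safe_list(path: str) -> str:
    # Arguments passed as a list are never parsed by a shell,
    # so `path` stays a single literal argument.
    return subprocess.run(
        ["ls", path], capture_output=True, text=True
    ).stdout
```

This is exactly the kind of difference that is easy to miss in AI output, because both versions work fine on friendly input.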
Always patch your dependencies
Old libraries often have security issues. Always check the dependencies used by AI-generated code: confirm they are real, actively maintained, and up to date. Use a scanner like OWASP Dependency-Check to verify there are no known vulnerabilities in them. Prefer popular dependencies and update them regularly. A recent study found that LLMs hallucinate non-existent dependencies about 20% of the time.
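Because of those hallucinated dependencies, it is worth a one-line sanity check that an AI-suggested import actually resolves before you build on it. A sketch using the standard library:

```python
import importlib.util

def dependency_exists(module_name: str) -> bool:
    """Check that an AI-suggested import actually resolves locally.

    This only confirms the module is installed; it does not prove the
    package is the legitimate one, so still verify names on PyPI.
    """
    return importlib.util.find_spec(module_name) is not None
```

A hallucinated name failing this check early is much better than discovering it at deploy time, or worse, `pip install`-ing a typosquatted package with that name.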
Security Mindset, Not Paranoia
Think of security as a mindset and not paranoia. A very common phrase used by security professionals is ‘Trust but verify’ which applies to AI-generated code too. Threat model your software! Imagine the worst-case scenario – can any user delete another user’s data? Could your software be used to spam other people? Again – leverage AI to come up with edge cases and worst-case scenarios.
Try prompts like:
- “What are the security risks to consider when building a chat app that stores messages in a database?”
- “Can you help me with a secure signup flow?”
These prompts become a starting point and make the AI a great brainstorming partner. Remember: AI is still just guessing; the final responsibility is with you. You can also ask your LLM of choice to review your entire codebase the same way.
Your 30-second security checklist
- Validate inputs (never trust user data)
- Don’t hardcode secrets — use environment variables
- Always use HTTPS for APIs and sites
- Don’t run your app with admin powers
- Double-check what AI tools suggest
- Use linters or basic security tools
- Patch your libraries regularly

Also leverage general security best practices when writing software. Refer to the OWASP Top 10 or similar lists of common code weaknesses. If you get these basics right, you’re already ahead of most indie apps out there. Good luck coding!