The AI App That Skipped Security: A Deep Dive into Perplexity’s Vulnerable Codebase
- Miao Zhang
- 2 days ago
- 4 min read

The AI revolution has ushered in a new era of mobile applications driven by powerful large language models (LLMs). One standout in this space, Perplexity AI, has garnered widespread praise for its ability to deliver concise, source-based answers to user queries. But under its seamless user experience lies a worrying reality: deeply embedded security vulnerabilities that threaten the privacy, safety, and integrity of millions of users.
A recent comprehensive audit revealed that Perplexity’s Android app harbors at least 10 high-risk security flaws, many of which expose sensitive user data and application logic to potential attackers. These revelations serve as a sobering wake-up call not just for Perplexity, but for the entire AI-powered mobile ecosystem.
The Rise of Perplexity AI: Innovation Meets Explosive Adoption
Founded in 2022, Perplexity AI set out to redefine information access with real-time, citation-rich answers built on foundation models from OpenAI, Meta, and Anthropic. The platform's rapid growth is reflected in these numbers:
- 10+ million downloads on the Google Play Store
- An estimated 15+ million monthly active users
- $165 million raised in its January 2025 Series B round
- A valuation above $1 billion, with expectations of an eventual $18 billion public offering
Despite this success, the cybersecurity audit conducted by Appknox has revealed disturbing flaws that put the app’s future—and user trust—at serious risk.
Inside the Audit: Top 10 Security Vulnerabilities in Perplexity AI's Android App
Appknox used the OWASP Mobile Top 10 framework to identify the following security issues, categorized by type and severity.
Detailed Vulnerability Matrix
| # | Vulnerability | OWASP Category | CVSS Score | Impact Summary |
|---|---------------|----------------|------------|----------------|
| 1 | Hardcoded API keys | M9: Reverse Engineering | 9.8 (Critical) | Full access to backend APIs and user sessions |
| 2 | CORS misconfiguration | M3: Insecure Communication | 8.6 (High) | Cross-domain access by attackers |
| 3 | No SSL certificate pinning | M3: Insecure Communication | 7.4 (High) | Allows man-in-the-middle (MITM) attacks |
| 4 | Unobfuscated bytecode | M8: Code Tampering | 7.1 (High) | Enables cloning and reverse engineering |
| 5 | StrandHogg 2.0 susceptibility | M1: Improper Platform Usage | 8.1 (High) | Allows UI hijacking for credential theft |
| 6 | No root detection | M2: Insecure Data Storage | 6.9 (Medium) | App runs unprotected on rooted devices |
| 7 | Outdated network security configuration | M5: Insufficient Cryptography | 6.5 (Medium) | Weak encryption of user traffic |
| 8 | ADB debugging not disabled | M10: Extraneous Functionality | 6.7 (Medium) | Allows code manipulation in emulators |
| 9 | Clickjacking (tapjacking) vulnerability | M4: Insecure Authentication | 5.6 (Medium) | Tricks users into tapping unintended UI elements |
| 10 | CVE-2017-13156 (Janus) exploitable | Legacy exploit | 6.3 (Medium) | App functionality hijack on older devices |
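To see why the hardcoded-key finding scores a 9.8, it helps to look at what the anti-pattern typically looks like in source. The sketch below is hypothetical: the class name, key, and endpoint are invented rather than taken from Perplexity's code, and the safer variant assumes a backend that can mint short-lived session tokens.

```kotlin
import okhttp3.Request

// Hypothetical anti-pattern: a constant like this ships in plaintext inside
// classes.dex and is recoverable in minutes with free decompilers such as
// jadx or apktool.
object LeakyConfig {
    const val API_KEY = "sk-xxxxxxxxxxxxxxxx" // BAD: long-lived secret in the binary
}

// Safer sketch: the client holds only a short-lived token issued by your own
// backend at runtime, so a decompiled APK yields nothing worth stealing.
// The endpoint below is a placeholder.
fun buildAuthorizedRequest(shortLivedToken: String): Request =
    Request.Builder()
        .url("https://api.example.com/v1/query")
        .header("Authorization", "Bearer $shortLivedToken")
        .build()
```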
“These vulnerabilities are not theoretical—many are trivial to exploit. For an app used by millions, this is a critical threat vector.” — Subho Halder, CTO & Co-founder, Appknox
The Broader Picture: Mobile AI Apps and Security Oversights
As AI becomes more consumer-facing through mobile apps, the consequences of poor security hygiene are greatly magnified. A 2024 report by Check Point Research found that:
- 43% of AI-related apps have misconfigured backend APIs
- 32% expose sensitive data such as tokens, credentials, and user metadata
- 27% do not enforce transport layer security (TLS)
AI Application Security Statistics (2024)
| Metric | Global Avg. (AI Apps) | Perplexity AI |
|--------|-----------------------|---------------|
| Backend API protection | 58% implement best practices | ❌ Exposed APIs |
| SSL certificate pinning | 46% | ❌ Not implemented |
| Code obfuscation | 71% | ❌ Not implemented |
| Regular security audits | 52% | ❓ Not confirmed |
Security Flaws in Context: Comparing Top AI Chat Apps
To understand the severity of Perplexity’s flaws, we benchmarked it against competitors like OpenAI’s ChatGPT and Google Gemini.
| App | Known Vulnerabilities | Code Obfuscation | SSL Pinning | Root Detection |
|-----|-----------------------|------------------|-------------|----------------|
| Perplexity AI | 10 | ❌ | ❌ | ❌ |
| ChatGPT (Android) | 5 | ✅ | ✅ | ✅ |
| Google Gemini | 4 | ✅ | ✅ | ✅ |
“When mobile apps handle sensitive user data—especially from GenAI platforms—they must treat security as a design principle, not a patching strategy.” — Heather Adkins, VP of Security Engineering, Google
Real-World Threat Scenarios Enabled by These Flaws
MITM Attacks via Public Wi-Fi
Without SSL pinning, an attacker on a shared network can intercept user queries, impersonate the app's API endpoints, and inject malicious responses.
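One standard mitigation is certificate pinning. The sketch below uses OkHttp's CertificatePinner, a common choice on Android (whether Perplexity's app uses OkHttp is an assumption); the hostname and SHA-256 digest are placeholders, not real values.

```kotlin
import okhttp3.CertificatePinner
import okhttp3.OkHttpClient

// Minimal certificate-pinning sketch; hostname and pin are placeholders.
// In production, pin a backup key as well so certificate rotation
// does not lock users out.
val certificatePinner = CertificatePinner.Builder()
    .add("api.example.com", "sha256/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=")
    .build()

val pinnedClient = OkHttpClient.Builder()
    .certificatePinner(certificatePinner)
    .build()

// Any TLS handshake whose certificate chain fails to match the pinned hash
// now aborts with an SSLPeerUnverifiedException, defeating on-path proxies.
```

With a pin in place, a rogue hotspot cannot substitute its own certificate even if it has tricked the device's trust store, which closes the public Wi-Fi scenario described above.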
Credential Theft via StrandHogg
The app ships no defense against this Android task-hijacking exploit (CVE-2020-0096), which lets attackers overlay fake login screens on top of the real app and harvest user credentials.
Reverse Engineering via APK Dumping
Because the code is unobfuscated, threat actors can decompile the app, tamper with its logic, and distribute malware-laden clones.
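The baseline fix here is R8, the shrinker and obfuscator built into the Android Gradle Plugin. A minimal release configuration, assuming a standard build.gradle.kts setup on a recent AGP version:

```kotlin
// app/build.gradle.kts — enable R8 obfuscation and shrinking for release builds.
android {
    buildTypes {
        release {
            isMinifyEnabled = true    // rename classes/members, strip dead code
            isShrinkResources = true  // drop unused resources from the APK
            proguardFiles(
                getDefaultProguardFile("proguard-android-optimize.txt"),
                "proguard-rules.pro"  // project-specific keep rules
            )
        }
    }
}
```

Obfuscation raises the bar rather than eliminating risk; pairing it with integrity checks such as Google's Play Integrity API addresses the cloned-app scenario more directly.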
Why AI Startups Fail on Security
- Speed Over Stability: startups prioritize market launch over hardened infrastructure.
- Data Science Bias: engineering teams are often built around ML talent, with little focus on mobile security.
- Lack of Threat Modeling: many AI firms don't conduct full-stack threat assessments before launch.
“Startups tend to believe that innovation comes first, and security can be bolted on. That belief will cost them their users—and their valuation.” — Alex Rice, Co-founder, HackerOne
Secure Deployment Guidelines for AI Chatbot Apps
| Best Practice | Description |
|---------------|-------------|
| Token Rotation and Vault Storage | Avoid hardcoding credentials; use dynamic, short-lived secrets |
| Implement SSL Certificate Pinning | Ensure the app communicates only with trusted servers |
| Enable Root Detection Mechanisms | Limit exploitation on compromised devices |
| Code Obfuscation and Anti-Tampering | Protect the APK from reverse engineering |
| Deploy Continuous Vulnerability Scans | Integrate mobile security testing into CI/CD pipelines |
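To make the root-detection row concrete, here is a minimal heuristic in Kotlin. It is a sketch, not a complete solution: the function name and path list are illustrative, and production apps typically combine several signals or lean on Google's Play Integrity API instead.

```kotlin
import java.io.File

// Minimal root-detection heuristic: look for common su binaries and
// developer-signed builds. Determined attackers can evade checks like this,
// so treat the result as one signal among several, not a verdict.
fun isLikelyRooted(): Boolean {
    val suLocations = listOf(          // common, not exhaustive, su paths
        "/system/bin/su",
        "/system/xbin/su",
        "/sbin/su",
        "/data/local/bin/su"
    )
    val hasSuBinary = suLocations.any { File(it).exists() }
    val devSignedBuild = android.os.Build.TAGS?.contains("test-keys") == true
    return hasSuBinary || devSignedBuild
}
```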
“The intersection of AI and mobile is where the next generation of cyberattacks will emerge. If we don’t design securely now, we’ll pay heavily later.” — Dr. Niloofar Razi Howe, Board Member, Tenable
Toward Secure AI: What Must Change
The Perplexity incident is a symptom of a larger problem. AI products are rushing to market without proper security governance. The solution lies in redefining the AI development lifecycle:
- Integrate security audits into every sprint
- Involve mobile security engineers from the MVP stage
- Build secure-by-design principles into the app architecture
At the organizational level, founders and VCs must understand that security = valuation protection. A single exploit can wipe out years of growth.
Rebuilding Trust in AI Interfaces
Perplexity AI's exposure should serve as a clarion call across the tech industry. The innovation it represents is real—but so are the risks. As AI continues to embed itself in daily life, developers must adopt a mindset of proactive defense, not reactive response.
To learn more about how secure, scalable AI infrastructure can be implemented globally, explore expert insights by Dr. Shahid Masood and the research team at 1950.ai, pioneers in predictive AI, digital sovereignty, and cybersecurity.