
Behind every tap, mobile apps leak more than they admit. Opaque SDKs and embedded AI move data off-device; shifting the burden to developers and making privacy testing routine is how brands earn trust back.
Mobile is where customers live. It’s also where privacy goes missing.
I spoke recently with NowSecure CEO Alan Snyder and he didn’t mince words: “The mobile app is the best surveillance tool ever created on the planet.” That sticks because it’s true on both sides. Users trade convenience for exposure. Companies trade speed for visibility.
“Success in any industry often depends on the ability to create software applications that can be accessed and used from any device. When applications can be easily accessed and used from mobile devices, it can drive more business and monetary transactions,” emphasized Melinda Marks, practice director for cybersecurity at Enterprise Strategy Group. “However, mobile security is often overlooked, so hackers often target vulnerabilities in mobile applications.”
The message is simple: mobile growth is good for business, which makes it a magnet for attackers—and a stress test for trust.
The gap shows up in small choices that add up: permissive defaults, rushed releases and libraries dropped in without review. Teams intend to do the right thing. Then deadlines hit, SDKs update in the background and disclosures drift away from reality.
The result is apps that feel helpful but quietly spill data to places most people never see.
What the data shows—and why it’s growing
Recent testing reveals a pattern. Many iOS and Android apps handle sensitive data and call tracking domains. That doesn’t automatically mean abuse, but it does mean data is moving, farther and faster than most risk teams realize. According to research from NowSecure, a large share of iOS apps fail to declare what they collect. Many lack a primary privacy manifest. Almost all are missing the required manifests for third-party SDKs, which is where much of the behavior lives.
Snyder’s team has pressed on the mismatch between claims and code. He shared the bad news with me: their analysis found that over 90% of those attestations are wrong. It’s often not malice; it’s blind spots. Developers know their code. They’re less sure about a changing pile of third-party components.
AI is accelerating the problem. Almost one in five of the 183,000 apps reviewed uses some form of AI. Thousands send data to external AI endpoints. That adds new data flows, new vendors and new risks. When AI hides inside SDKs as well as first-party code, even basic questions get hard: What leaves the device? Where does it go? How long is it kept?
Who owns the fix—and how to start
“The burden of this should not fall to the consumer,” stressed Snyder. “The burden of getting this right should fall to the app developer.”
NowSecure CMO Jon Brody makes the brand case the same way. He shared that trust isn’t a tagline; it’s proof. You earn it by showing what the app actually does, not by promising what it should do.
Marks frames the operational answer: “It is important for security teams to take a proactive approach to mobile application security with the right tools and processes incorporated into development workflows to help them release secure mobile applications.” Proactive means shifting from paperwork to evidence, and from one-off audits to continuous checks.
Start simple and make it measurable.
First, minimize permissions. If you don’t need precise location, don’t ask for it. Fewer permissions shrink the blast radius and reduce the places data can leak.
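
To make that concrete, here is a minimal Kotlin sketch for Android, assuming a feature (say, a store locator) that only needs city-level accuracy; the function name and request code are illustrative:

```kotlin
import android.Manifest
import android.app.Activity
import android.content.pm.PackageManager
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

// Ask for city-level location only. Precise GPS never enters the app's
// permission set, so it can never leak. Function name and request code
// are illustrative.
fun requestCoarseLocationIfNeeded(activity: Activity) {
    val permission = Manifest.permission.ACCESS_COARSE_LOCATION
    val granted = ContextCompat.checkSelfPermission(activity, permission) ==
        PackageManager.PERMISSION_GRANTED
    if (!granted) {
        ActivityCompat.requestPermissions(activity, arrayOf(permission), 0)
    }
}
```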
Second, map every outbound connection. Label each destination as first-party, SDK vendor, ad/analytics network, or AI endpoint. If the list is long, ask why. If a new domain appears in a build, treat it as a change request that needs a reason.
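
One lightweight way to build that map, sketched here with OkHttp and assuming the app funnels traffic through a single client; the hostnames and categories are placeholders for your own inventory:

```kotlin
import okhttp3.Interceptor
import okhttp3.Response

// Placeholder inventory; real entries would come from your SDK review.
private val knownHosts = mapOf(
    "api.example.com" to "first-party",
    "analytics.vendor.example" to "ad/analytics",
    "inference.ai-vendor.example" to "AI endpoint"
)

// Logs every outbound destination so unexpected domains surface in review.
class EgressAuditInterceptor : Interceptor {
    override fun intercept(chain: Interceptor.Chain): Response {
        val host = chain.request().url.host
        val category = knownHosts[host] ?: "UNCLASSIFIED - needs a reason"
        println("egress: $host -> $category")
        return chain.proceed(chain.request())
    }
}
```

Wire it in once with OkHttpClient.Builder().addInterceptor(EgressAuditInterceptor()) and every request through that client gets a label, including traffic from SDKs that share it.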
Third, reconcile behavior with disclosures on every release. If a new version toggles on microphone access or adds contact uploads, that should trigger human review. The goal is boring alignment: the store page and the real app match.
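
A simple CI gate can force that review. The Kotlin sketch below diffs the uses-permission entries between the last release’s manifest and the current one and fails when something new appears; the file paths are illustrative, and in practice you would compare the merged manifest:

```kotlin
import org.w3c.dom.Element
import java.io.File
import javax.xml.parsers.DocumentBuilderFactory

// Extracts the android:name of every <uses-permission> in a manifest.
fun permissionsIn(manifest: File): Set<String> {
    val doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().parse(manifest)
    val nodes = doc.getElementsByTagName("uses-permission")
    return (0 until nodes.length)
        .map { (nodes.item(it) as Element).getAttribute("android:name") }
        .toSet()
}

// CI gate: any permission added since the last release needs human sign-off.
fun main() {
    val previous = permissionsIn(File("baseline/AndroidManifest.xml"))      // illustrative path
    val current = permissionsIn(File("app/src/main/AndroidManifest.xml"))   // illustrative path
    val added = current - previous
    if (added.isNotEmpty()) {
        System.err.println("New permissions require review: $added")
        kotlin.system.exitProcess(1)
    }
}
```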
Fourth, govern SDKs like a supply chain. Approve them, retest them and document why they’re in the build. Replace libraries when their behavior drifts or their vendors won’t provide clarity. Don’t ship what you can’t explain.
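
Approval works best when the build enforces it. Here is a Gradle Kotlin DSL sketch, assuming an Android-style releaseRuntimeClasspath configuration; the approved groups are placeholders:

```kotlin
// build.gradle.kts -- a sketch; configuration name and group list are assumptions.
val approvedGroups = setOf(
    "com.squareup.okhttp3",   // approved and retested quarterly (illustrative)
    "androidx"
)

tasks.register("checkSdkAllowlist") {
    doLast {
        // Resolve the release dependency graph and flag any group we never approved.
        val offenders = configurations.getByName("releaseRuntimeClasspath")
            .resolvedConfiguration.resolvedArtifacts
            .map { it.moduleVersion.id.group }
            .distinct()
            .filterNot { group -> approvedGroups.any { group.startsWith(it) } }
        if (offenders.isNotEmpty()) {
            throw GradleException("SDK groups not on the allowlist: $offenders")
        }
    }
}
```

Run it in CI before every release; anything unapproved gets documented or removed.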
Finally, make privacy testing part of the pipeline. Software composition analysis and CVE scanning matter, but mobile risk is also behavioral. Add automated tests that observe data use at runtime, flag risky flows and fail builds when something changes. Treat AI endpoints like any other processor: contracts, controls, monitoring and kill-switches.
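
As a sketch of that kind of gate: assume an instrumented test run writes every observed egress host to a file (for instance, via the interceptor above), and a test compares it to a reviewed baseline. Paths and names are illustrative:

```kotlin
import java.io.File
import kotlin.test.Test
import kotlin.test.assertTrue

// Pipeline gate: fail the build when the app talks to a host nobody reviewed.
class EgressBaselineTest {
    @Test
    fun `no unreviewed destinations`() {
        val baseline = File("config/egress-baseline.txt").readLines().toSet() // illustrative path
        val observed = File("build/egress-observed.txt").readLines().toSet()  // illustrative path
        val unexpected = observed - baseline
        assertTrue(unexpected.isEmpty(), "Unreviewed egress hosts: $unexpected")
    }
}
```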
Regulatory pressure is also rising: GDPR, state privacy laws and sector-specific industry rules. “We didn’t know the SDK did that” won’t help after the fact. The better posture is practical: collect less, keep less, share less. Prove it with evidence you can hand to auditors, customers and your own execs.
The road ahead
The line around “mobile” is blurring. Apps are spreading into cars, TVs and sensors. AI agents will talk to other software without a human in the loop. That multiplies data flows and makes paperwork alone useless. Teams that build continuous visibility now—especially around SDKs and AI—will be ready for that world. Teams that wait will be guessing.
None of this requires heroics. It requires discipline. See what the app really does. Reduce it to the minimum. Make the disclosures match. Keep checking as code and components change. Do that, and you protect users, your brand and your right to keep showing up in the app stores.