InkBridge Networks - A new name for Network RADIUS

AI intrusion detection

When pattern matching works (and when it doesn't)

AI can spot network intruders brilliantly, but it can also create more security disasters than it prevents. 

By Jana Sedivy, VP of Customer Experience and Product

Yes, AI is transforming network security, but many AI security promises are overblown nonsense.  

I've watched plenty of technology trends come and go over two decades working in network security and authentication. Some deliver on their promises. Most fade as quickly as they appeared.  

AI sits somewhere in the middle - revolutionary in certain applications, catastrophically dangerous in others. The trick is knowing which is which. 

AI excels at finding statistical patterns in data, but you can't base network security on “it’s statistically correct”. The same technology that can spot anomalous network behaviour will still generate insecure code. Understanding this paradox is the difference between a secure network and a data breach waiting to happen. 

The vibe coding security crisis  

Let's start with where AI creates security disasters. Vibe coding refers to the practice of rapidly generating applications using AI assistants without deep technical expertise. Someone describes what they want, an AI generates the code, and boom - application deployed. 

Sounds convenient, right? It's actually a cybersecurity nightmare. 

When you vibe code applications, AI doesn't automatically include data protection around what you're building unless you specifically ask for it. And if you're vibe coding, you probably don't know what best practices to request.  

The Tea app disaster illustrated this perfectly. It was a viral application where women could discuss bad dates and warn others about men to avoid. To prove you were female, you had to upload your driver's licence photograph. 

The creators promised to delete all this data. They didn't. In fact, they didn't even implement the most rudimentary protections around it. When 4chan discovered the vulnerability, they took screenshots of hundreds of women's driver's licences along with all their private messages. Complete disaster. And that sort of thing is just going to keep happening. 

I spoke with an entrepreneur recently who proudly told me he'd vibe coded an entire booking application for private car services in 15 minutes. I asked if it was secure. He assured me it was just a simple booking system. Then I asked: "Are you storing people's credit card numbers?" Long pause. "Hmm, yeah." 

This isn't an isolated problem. This is systemic. 

The slopsquatting vulnerability 

There's another dimension to this that's even more insidious: slopsquatting. You've probably seen AI slop - the garbage output that AI systems generate when they hallucinate or make things up. When you use AI to code, it sometimes suggests importing Python packages that don't exist. 

Malicious actors can then create packages with those hallucinated names and publish them. When other unsuspecting developers use AI that recommends the same fictitious package, they download and install malicious code directly into their systems. 
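One pragmatic defence is to screen every AI-suggested dependency against a list of packages your organisation has actually vetted, before anything gets installed. The sketch below assumes such an allowlist exists; the `VETTED_PACKAGES` set and the `screen_dependencies` helper are illustrative, not a real tool.

```python
# Minimal sketch: screen AI-suggested dependencies against a vetted allowlist.
# Assumption: your organisation maintains VETTED_PACKAGES; the names here
# are purely illustrative.

VETTED_PACKAGES = {"requests", "numpy", "cryptography", "flask"}

def screen_dependencies(suggested: list[str]) -> tuple[list[str], list[str]]:
    """Split AI-suggested package names into vetted and unknown.

    Unknown names are exactly the ones a slopsquatter hopes you will
    install without checking.
    """
    vetted = [p for p in suggested if p.lower() in VETTED_PACKAGES]
    unknown = [p for p in suggested if p.lower() not in VETTED_PACKAGES]
    return vetted, unknown

# "requestz" is the kind of plausible-but-hallucinated name an AI may emit
ok, suspicious = screen_dependencies(["requests", "requestz"])
```

Anything that lands in the `suspicious` list goes to a human for review rather than straight into `pip install`.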

When you use AI to automate routine IT tasks, it makes mistakes based on similar hallucinations. Do you really want an automated AI taking your entire network down? You wouldn't give even a skilled junior IT person unlimited access to your entire network, so you shouldn't give AI systems that access either. 

As a practice, vibe coding has serious pitfalls. In the rush to embrace all the wonderful things AI is supposed to bring, there's been a shocking level of inattention to basic security. Many organisations perform rigorous security vetting before allowing any application into their ecosystem. But for some reason, with AI tools, they've just said "here's some AI, go nuts" and opened the doors. CISOs are starting to realise this and trying to put fences around it, but that doesn't address the problem of developers creating applications and releasing them to the public. 

As a user, you have no idea if your data is going to be secure or not. Legislatively, there aren't many protections. There aren't even requirements to disclose when you've had an incident. We only heard about the Tea app because independent journalists at 404 Media picked it up. (Fantastic tech reporting, by the way - I support them through their subscription option.)  

There's probably a lot more we don't know about. For consumers, this is genuinely frightening. I don't want to install applications now because I simply don't know what's happening with my data. 

The junior developer problem 

What makes vibe coding particularly dangerous is when junior developers use it. They don’t have the knowledge of senior engineers who've put years into learning and testing.  

Junior developers who use code delivered by AI have a hard time recognising what's good and what isn't. They look at insecure code, think "that's exactly what I would have done", and push it to production. At InkBridge Networks, we limit our junior developers' use of AI. We want them to learn the old-fashioned way - making mistakes, realising afterwards "I should have done it this way," and building genuine expertise. AI is a tool which can help them learn. It should not be a tool which they use to automate away their jobs, or which lets them avoid critical thinking. 

There's real value in learning a craft properly. As long as you have oversight from senior engineers who can do peer reviews, you're building capability. We think it's very short-sighted to try to replace junior developers with AI.  

It's a much better practice to train juniors traditionally, make sure they learn the foundations, and then as they become more senior, they can start using these tools effectively. AI is a tool like everything else. There's nothing magic about it. 

Why AI data analysis is fundamentally different 

Now here's where AI actually works: data analysis.  That is, log analysis, intrusion detection, or other ways to find correlations in data that you didn’t know were there.  AI isn’t smarter or more advanced in this application; it’s just that this analysis is fundamentally a different kind of problem. 

AI runs on data - massive amounts of it. It uses terabytes of information to build statistical models that can make predictions or identify patterns. For AI in network management, this creates an insurmountable obstacle: do you have terabytes of data about your network? For something like authentication and access control, there simply isn't enough data to do anything useful with AI. 

But data analysis is different. You can use AI for this analysis because you can detect patterns in your own network traffic, identify what looks unusual, and surface those anomalies to a human who can investigate more thoroughly. 

This is where we need to be precise about what AI is actually doing. Modern AI, as we commonly think of it today, is primarily statistics. There's a rich history of AI that encompasses other approaches, but the large language models and pattern-matching systems everyone's excited about right now are fundamentally statistical analysis at scale. 

For secure network management, there is a big difference between policy and statistics.  Policies are clear, factual rules based on real-world and verified data: 

  • People from foreign countries can't sign into our VPN. 
  • Only authenticated devices can access this network segment.  

These aren't pattern-matching problems. They're policy enforcement problems that require human-defined rules. 
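To make the contrast concrete, a policy like the VPN rule above can be written as a deterministic check. This is a hypothetical sketch: the `AccessRequest` type and `ALLOWED_VPN_COUNTRIES` set are invented stand-ins for whatever your directory and policy store actually provide.

```python
# Sketch of deterministic policy enforcement - no statistics involved.
# AccessRequest and ALLOWED_VPN_COUNTRIES are illustrative names, not a
# real API.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_country: str          # verified data, e.g. from HR records
    device_authenticated: bool # did the device pass authentication?

ALLOWED_VPN_COUNTRIES = {"CA", "US"}  # illustrative policy data

def vpn_policy_allows(req: AccessRequest) -> bool:
    """Either the human-defined rule holds or it doesn't.

    The same input always yields the same decision - there is no
    "probably allowed".
    """
    return (req.user_country in ALLOWED_VPN_COUNTRIES
            and req.device_authenticated)
```

No amount of training data changes the answer here; the rule is the rule, which is exactly what you want from access control.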

Data analysis, however, is exactly the kind of statistical pattern-matching problem where AI excels. You're looking for deviations from normal behaviour:  

  • This traffic pattern looks wrong. 
  • These login attempts seem suspicious. 
  • This data transfer doesn't match established patterns.  

AI can process massive volumes of network traffic data and flag anomalies that would be nearly impossible for humans to spot in real time. 
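The deviation-from-baseline checks listed above can be as simple as a z-score over historical measurements. The toy sketch below is not a production detector; the transfer-volume numbers are made up for illustration.

```python
# Toy anomaly flagging: how far does today's value sit from the baseline,
# measured in standard deviations? The flag is a lead for a human analyst,
# not a security decision.
import statistics

def flag_anomaly(baseline: list[float], current: float,
                 threshold: float = 3.0) -> bool:
    """Flag `current` if it deviates more than `threshold` standard
    deviations from the historical baseline."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    if stdev == 0:
        return current != mean
    return abs(current - mean) / stdev > threshold

# e.g. nightly outbound transfer volume in GB over the past fortnight
history = [4.8, 5.1, 5.0, 4.9, 5.2, 5.0, 4.7,
           5.1, 5.0, 4.9, 5.3, 4.8, 5.0, 5.1]
flag_anomaly(history, 42.0)  # an 8x spike gets surfaced for human review
```

Real systems model many dimensions at once, but the principle is the same: the statistics surface the outlier, and a person decides what it means.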

The critical difference is that AI is identifying potential problems, not making security decisions.  

A human security analyst still needs to investigate flagged incidents, determine if they're genuine threats, and decide on appropriate responses. AI is a detection tool, but it shouldn’t have autonomy. 

There have been many news stories about AI systems flagging the wrong person or the wrong behaviour. This is because AI systems are statistical: they are probably correct, not provably correct. The result is that you still need people to validate the output of AI. You still need to train your people to double-check what the AI flags before they make any decision based on it. 

Real-world success: how telcos use AI for network monitoring 

I've seen this work in practice with telecommunications carriers. Companies are using AI to perform health checks on their networks by analysing traffic patterns.  

When they detect unusual behaviour - something weird happening with traffic - they can identify the problem with remarkable precision. In some cases, they can determine "this is a failing cable" and dispatch a technician to fix it before customers even notice a drop in quality of service. 

This is enormously valuable. The Wireless Broadband Alliance's 2026 Industry Report found that roughly 50% of consumers experience connectivity failures at least once monthly, and approximately $70 billion is lost annually to customer churn. When carriers can detect and resolve network issues proactively, they prevent service degradation that drives customers away. 

Why does this work when so many other AI applications fall short? Because large-scale telecommunications networks generate massive volumes of data - exactly what AI needs to function effectively.  

Every packet transmission, every connection request, every quality-of-service metric creates data points. Aggregate that across millions of users and you have the statistical foundation for meaningful AI analysis. 

Compare that to authentication systems. For a typical enterprise network, you might have thousands of authentication attempts daily. That sounds like a lot, but it's not nearly enough data to train effective AI models. And even if you could aggregate data across multiple organisations, you'd face insurmountable privacy and legal barriers. Nobody's going to share their authentication logs with a third-party AI service, and rightly so. 

After incidents like the CrowdStrike outage that affected millions of systems globally, companies are increasingly (and correctly) hesitant to grant external security vendors broad access to their networks. When you're locking down access, AI systems don't have the data they need to operate effectively. 

Where AI systems still fall short 

Even in ideal use cases like network monitoring, AI isn't a complete solution. You still need traditional firewalls and intrusion detection systems running alongside AI-powered analysis. Signature-based detection remains essential for identifying known attack patterns and established threat indicators. 

AI data analysis only becomes useful after you've implemented proper security foundations. That means VPNs, firewalls, strong authentication, and comprehensive audit logging. If your network is wide open, you don't need AI to analyse network security – you already know that your network is insecure. 

Once you've laid the foundation (secured logins, implemented VPN access, and established proper logging) then you can analyse that data with AI to detect patterns that would be difficult for humans to spot. But AI is the final layer. 

Consider, too, the accuracy problem. While 99% accuracy might be fantastic for content generation, that 1% error rate becomes catastrophic in security contexts.  

You can't run a corporate firewall on "mostly pretty good." The problem of AI hallucination - where systems confidently generate plausible but entirely fictitious information - is particularly troubling in security contexts. 

An AI can confidently report that a network is secure while missing critical vulnerabilities. Or it can hallucinate security vulnerabilities, sending you on a wild goose chase for something that doesn't exist. In either case, the consequences can be severe. This is why human oversight remains absolutely essential, even for AI applications that genuinely add value. 

For smaller networks, AI analysis might not be appropriate at all. If you don't have sufficient historical data to establish baseline behaviour, AI systems will generate excessive false positives because they can't accurately distinguish between normal and anomalous activity. You need both scale and established patterns for AI to function effectively. 

The paradox resolved: pattern recognition isn’t security architecture 

AI is a tool. Used properly in the right contexts, it can enhance security operations. Used carelessly for applications it's not suited for, it creates vulnerabilities. The key is having the expertise to know the difference. 

If you're evaluating AI security solutions for your network, ask hard questions: 

  • Where is the data coming from?  
  • How much data is needed for effective analysis?  
  • How does the system handle uncertainty?  
  • What happens when the AI makes mistakes?  
  • Is this solving a problem that couldn't be addressed with simpler, more reliable approaches? 

Sometimes the most innovative approach means steering clear of the latest technology and applying proven principles with discipline and precision - knowing when the shiny new tool actually solves real problems versus when it's just another source of risk. 

Need help? 

InkBridge Networks has been at the forefront of network infrastructure for over two decades, tackling complex challenges across various protocols. Our team of seasoned experts has encountered and solved nearly every conceivable network access issue. For network authentication built on 25 years of expertise rather than statistical guesses, get in touch.  

Related Articles

AI in network management: A hard look at real-world limitations

Today, AI sits at the peak of the hype cycle, but AI in network management faces fundamental challenges that the industry seems reluctant to acknowledge. While it's revolutionizing certain fields, network security isn't necessarily one of them—at least not yet. 

Enterprise ransomware prevention starts with network authentication

When properly implemented, network authentication can serve as a powerful barrier against ransomware attacks, stopping bad actors before they gain the initial foothold they need.