It’s time to be concerned about browsing
Google DeepMind recently posted a video generated by AI. The figure, the water, the words, and even the accent. All AI-generated:
A couple of days later, Voesis Studios posted this video:
Why is there a need to call out these videos? Well, Jake Peterson from Lifehacker had this to say:
“What I’m saying is, it’s time to turn on your bullshit detectors and keep them active full-time. When engaging with videos on the internet—especially short-form algorithmic clips—you might be safer operating under the assumption the content is fake from the jump, and require proof beyond a reasonable doubt that what you’re seeing wasn’t generated with a simple prompt and a $250 budget. That feels extreme, but after what I’ve seen this week, I don’t see another way to engage with this content going forward.”
Why would a cybersecurity company draw attention to these videos? Because cyberattacks are, at their core, about taking advantage of another human, and if deepfakes can be created at the quality shown above, a lot of humans are going to be taken advantage of. Phishing, impersonated websites, and malicious web forms are still some of the most popular initial attack and entry methods for bad actors. It’s probably safe to say the days of joking about identifying phishing emails and spam by their Google Translate quirks or horrific grammar have gone the way of the dodo.
Identity worth more than an exploit
In February of this year, Forbes noted:
“Three of every four attacks now rely on valid credentials rather than malicious software. This shift is being driven by an evolving cybercrime economy, where stolen identities are as valuable as, if not more valuable than, exploitable vulnerabilities. A growing underground market for credentials, combined with the rise of automated phishing and AI-driven deception, is making traditional security models increasingly obsolete.”
At the same time, other security providers are seeing the same types of increases across different technologies and user bases. Red Canary found a 4x rise in identity attacks, along with a notable increase in infostealers, macOS threats, and business email compromise (BEC) attacks.
What should we do?
On their own, these reports and videos are certainly interesting, but taken together, they paint a picture of an increasingly dangerous situation the cybersecurity industry finds itself in. Traditional security organizations will be forced to protect their users and systems from these complex and convincing attacks through a combination of content and identity management, restricted web access, and reduced mobile access. The goal will be to put as many detections and barriers between the user, their identity, and an adversary as possible.
This will cause a two-fold problem for security organizations:
- User complaints will rise due to newly restricted access, changes to previously accepted workstreams, additional steps to complete work that used to take less time, and so on. Security becomes the blocker to operations and the bad guy in the business.
- Increased false positives and alert overload. As additional controls are rolled out and existing ones are tightened, the number of alerts, particularly false positive alerts, will increase.
As an industry, we are not at a point where security teams can absorb the additional alerts and workloads. The discussions about security burnout hitting a breaking point in late 2024 have not subsided, and while the promised AI tools are starting to show value, that value is largely consumed by detecting and dealing with rising GenAI-based threats.
How do we fix it?
This leads us to a real potential fix for some of these issues. A better security solution lives where the user and the problem are: in the browser, not in the network. It offers better true-positive/false-positive determination for phishing, credential theft, and even DLP and GenAI data loss. It acts as a reminder and helper for the user in their new operating system, the browser, where they are comfortable interacting with their email, news, favorite GenAI helper, and, most importantly, company information and assets through SaaS-based systems.
Perimeter-based content management and the blocking of known bad sites will still have a place, but the rise of AI and advances in impersonation capabilities will drive faster creation and rotation of infrastructure for credential and identity harvesting. A modern system needs to review what the user is being presented with, check for indications of malfeasance, and intervene before the problem is rendered. The goal can no longer be for users to “experience no problems”; the goal, and user education with it, now require a “cause no harm” mindset.
Seraphic browser security is the solution
This is where Seraphic steps in. Our platform reduces false positives at the browser session level by focusing on real-time risk factors and user behavior. Rather than simply blocking access based on reputation or known threats, we allow the connection but disable interaction the moment it is determined to be malicious or suspicious. This approach exceeds traditional industry capabilities, which often rely on outdated blacklists or rigid controls. By operating at the browser level, Seraphic can support users directly in their experience, enabling them to learn and adapt while staying protected.
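To make the “allow the connection, disable interaction” pattern concrete, here is a minimal, purely illustrative TypeScript sketch. The signal names, weights, and threshold are invented for this example and are not Seraphic’s actual detection model:

```typescript
// Illustrative only: a toy risk-score gate for a browser session.
// Signal names and weights below are hypothetical examples.
type Signal = { name: string; weight: number; present: boolean };

type Decision = "allow" | "disable-interaction";

function pageDecision(signals: Signal[], threshold = 0.5): Decision {
  // Sum the weights of the signals observed on this page.
  const score = signals
    .filter((s) => s.present)
    .reduce((sum, s) => sum + s.weight, 0);
  // The page renders either way; above the threshold, interaction
  // (form input, clipboard, downloads) would be gated rather than
  // the connection being blocked outright.
  return score >= threshold ? "disable-interaction" : "allow";
}

// Example: a newly registered domain serving a credential form.
const decision = pageDecision([
  { name: "newly-registered-domain", weight: 0.4, present: true },
  { name: "credential-form", weight: 0.3, present: true },
  { name: "known-brand-lookalike", weight: 0.5, present: false },
]);
```

The point of the sketch is the design choice, not the scoring math: the user still sees the page, so nothing is silently blocked, but risky interaction is disabled until the session proves trustworthy.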
Ultimately, effective security must shift toward empowering users, educating and influencing behavior, rather than just expanding controls and alerting mechanisms. Legacy solutions can’t be where the user is. Seraphic can. For more information, visit seraphicsecurity.com or download our enterprise browser security white paper.