The Dev is in the Details

Cloud cybersecurity: Vulnerabilities, frauds and limitations | Mateusz Chrobok | The Dev is in the Details #5

April 02, 2024 | Łukasz Łażewski

► Is cloud security an illusion?

Join us in this eye-opening episode as cybersecurity expert Mateusz Chrobok reveals the vulnerabilities and limitations lurking within cloud services. From analyzing real-life examples to dissecting the fraudulent tactics used by cybercriminals, this discussion will shed light on the harsh realities of cloud security.

► Our guest 🌟

Mateusz Chrobok 👉 https://www.linkedin.com/in/mateuszchrobok/ 
Cybersecurity, AI and Startup consultant.


► In today’s episode:

  • Proactive measures undertaken by cloud service providers to safeguard the integrity of the cloud and the tangible outcomes of their efforts.
  • Sophisticated tactics employed by cybercriminals in exploiting vulnerabilities within cloud services.
  • Nuanced methodologies to infiltrate cloud systems and jeopardize sensitive data.
  • In-depth analysis and real-world case studies illustrating how vulnerabilities in cloud environments can lead to security breaches and data compromises.
  • Realms of malvertising, fraudulent activities, and the clandestine operations of the darknet.
  • Navigating the delicate balance between individual freedoms and the imperative of security.

► Decoding the timeline:

00:00 – Cloud service providers' responsibilities and vulnerabilities
06:18 – Limitations of the cloud 
09:35 – Open source vs closed source safety considerations
12:37 – Poisoned language models 
16:03 – Malvertising
20:45 – Darknet operations
23:23 – Freedom vs security on the Internet
36:49 – Cryptography, AI, and cybersecurity
52:11 – Battling misinformation in the digital age

#AI #cybersecurity #cloud

► Materials and information mentioned in the episode: 

***

The Dev is in the Details is a podcast where we talk about technology, business and their impacts on the world around us.

Łukasz Łażewski 👉 https://www.linkedin.com/in/lukasz-lazewski-40562718/
Pedro Paranhos 👉 https://www.linkedin.com/in/pedroparanhos/
Write to us 👉 podcast@llinformatics.com

***

Transcript

Speaker 1:

There is this illusion that cloud is infinite, but at some point, if you're big enough, you can actually hit the limits. At the end of the day, fraud is a little bit like sales, so you're having this funnel. I have seen situations where people are saying, hey, I'm having a plugin on the darknet, a plugin with 10,000 users. I can sell it for $10,000, for example; you can update it and you take over the browsers of the users. Ultimately, from my perspective, the internet is a dangerous place. The sooner you learn that, the better.

Speaker 2:

Welcome to The Dev is in the Details podcast. Dear listeners, today's guest is Mateusz Chrobok. With a diverse background in cybersecurity and leadership roles like CTO or CEO, he navigates the realms of innovation seamlessly. His expertise extends to cybersecurity startups and the realm of artificial intelligence. Through his YouTube channel, he generously shares insights into the latest trends and tidbits from these spheres. Mateusz, welcome to the show. Thank you very much for the invitation. It's amazing to have you here, and today's topic is security in and of the cloud, so I'm really curious; fantastic to have you here to talk about this with us. One of the things I want to start with is that everyone has this illusion of security in the cloud working perfectly, especially with the big providers. I wonder what your thoughts are on that. Is it true that, because the Googles of this world, or Dropbox, or all of these guys provide some sort of file storage, they really afford a sense of security and a team that guarantees safety for their users, or is it actually an illusion?

Speaker 1:

So, from my perspective, it actually goes both ways, because they're having fantastic teams that are protecting their users. They're protecting the services, but the complexity of the cloud, I mean the configuration of the cloud, is so difficult for the companies, for the people that are entering there, that they are actually falling victim to multiple attacks. Like, you know, one of the most common attacks last year was related to ransomware in the cloud, which was caused mostly by misconfiguration. There are so many credentials, managing identity, multiple integrated services in between, that people that are not used to the cloud are quite easily lost. So I would say they're doing fantastic work at scale, so they're very good at protecting against DDoSes. They're having a lot of services that support security from a visibility perspective, like central logging, network management, anti-DDoS protection, and so on and so on. But overall, people are often falling victim to situations where they misconfigure their services, and the whole myth that the cloud is more secure is dying with it.

Speaker 2:

But that's because, ultimately, they're launching some sort of a service, right? Because someone is running, let's say, a startup, and they're launching some sort of website or a web application, and, you know, they misconfigured the operating system that runs in the cloud and it has too many things left open.

Speaker 1:

And they're not experienced with that. Like, you remember people coming from the metal, I mean from the hardware, to the cloud; they were like, okay, this is just a virtual machine, we're going to approach it the same way. But you're having a whole ecosystem on top, and you need more people and more knowledge to manage all of that.

Speaker 2:

True, true, but I would think, you know, because we have read-only file systems and whatnot, right, which immediately kind of bolsters the security. Isn't that the default, or what is your take on that?

Speaker 1:

So out there, right now, we are standing on the shoulders of giants.

Speaker 1:

Everybody is using preconfigured images in Kubernetes.

Speaker 1:

So if you're using Docker images, you shall verify what's out there, but not everybody does so.

Speaker 1:

Dozens of people are using services that have vulnerabilities in the images themselves, not knowing about them. Then you're adding additional vulnerabilities that your service is providing, and at the end of the day, you're having a service that is easy to set up, very scalable, very fancy, because you're running Kubernetes, you're cloud-native and so on, but down there you are prone to certain attacks. It's not always very bad, I mean, not all of the vulnerabilities can be exploited in every environment, but people are not aware that, by using some of the work that is provided by the cloud providers and maintainers of images, they actually have to trust them, and trust them with the reaction time. Because the world is continuously moving and sooner or later there are more reports of new vulnerabilities in the software, and you're dependent on a vendor or somebody that is maintaining that, and then you need to upgrade all of your production environment. How quickly can you do it once you spot it? That's the whole game of incident response in the cloud, because everything is moving.

Speaker 2:

Right, yeah, sounds a bit like a yeah.

Speaker 1:

I mean that's why I'm saying that my job is forever. I mean there are more activities and there's more jobs than actually people can do, so cybersecurity is fun.

Speaker 2:

Yeah, absolutely. It's really exciting. And would you say that this is still a better route than, let's say, spending unlimited money erecting your own infrastructure entirely from scratch?

Speaker 1:

So I would say that that requires looking into the use cases. I'm building my own infrastructure in my home for the purpose of testing local large language models and tracking some things and so on and so on, which would be more expensive in the cloud. I'm having some cloud services for things that require the DDoS protection that needs to scale up, and so on and so on. So I believe we are past the peak of the hype, when everybody was like cloud, cloud, cloud.

Speaker 1:

Right now people are seeing that, well, it's not always the cheapest. It's not providing all of the things that you wanted it to provide. So some of the people are still using local infrastructure. The others are looking into hybrid cloud, migrating some services from on-prem to the cloud. I believe, in today's world, one of the most important parts is to be able to react quickly, to scale up, and there is no other place than the cloud to achieve it, actually. But if you're looking into various service providers, I mean, I experienced it with GCP, with Google, but it also happened with Azure in the London zone, I believe: they were overloaded at some point, which caused the customers to be very unhappy when they were under heavier load. So that's a part of it. There is this illusion that cloud is infinite, but at some point, if you're big enough, you can actually hit the limits.

Speaker 2:

But just to see if I got that right, the overload was not related to a security threat, right? It was because the cloud basically wasn't scaling fast enough for the number of customers it was serving.

Speaker 1:

Yeah. So if you're out there and, let's say, you request another 100 virtual machines or images to be run and so on and so on, sometimes they're going to respond like, hey, we're not going to have resources, or you need an approval, or something like that. And there were situations where some cybercriminals were actually taking advantage of that. Let's imagine they take over your account on Google or AWS or whatever provider, and what they were doing was spinning up as many instances as possible, as much as your connected credit card allowed, and they were mining cryptocurrencies. So not being able to spawn an infinite number of machines is a protection. I mean, every limit is a protection of some kind, so you don't lose all of your money.

Speaker 2:

I remember a conversation I had with someone from Heroku in 2018, when they set up the free plan that way exactly for that reason, because people were just booting up those instances for free and using them for just a couple of hours, before Heroku would kill them anyway, to, you know, mine Monero or Bitcoin or something.

Speaker 1:

I mean, I've seen even bots, because out there in the dark you can spot some of the scripts that people were using to set up phishing pages on Heroku. They were very short-lived, but that was enough for the people to actually spread the campaign, get some of the credentials of the people that entered that site, and that was very cheap to use for fraud.

Speaker 2:

Interesting, but I've seen those sites. What always surprised me is that the URL in Heroku, unless you pay for a DNS add-on, is ultimately some random gibberish .herokuapp.com address. So ultimately it's really easy to spot that this is a fake site. But you were saying people were still falling for it, or maybe there was another way to mask this?

Speaker 1:

From my perspective, people are falling for even sillier things. That was just one of those. At the end of the day, fraud is a little bit like sales. So you're having this funnel.

Speaker 2:

And some of the people will convert. Got it, okay. Yeah, I'm curious, going back for a second to those images and how things are spun up in the cloud environment, where there is an open source community that creates various Linux distributions, specifically configured, pre-configured. Do you know of any case where there was ever an insider that had prepared a Linux distro, trusted for months or even years, and then eventually it actually became poisoned, because someone explicitly did something to ensure that all of these machines, all of the infrastructure that uses them, is actually updating to a poisoned version, so to say? I'm not sure if poisoned is the right term.

Speaker 1:

Yeah, poisoned is quite a good term. It reminds me of the poisoning of the large language models that happened recently. The only case that I have in my mind is related to poisoning the kernel code. Researchers at one of the universities, I don't recall if that was Minnesota or something like that, introduced patches into the mainline kernel. They introduced some vulnerabilities just to check if this is possible, and it went through the review.

Speaker 1:

There are people that are looking into the open source and verifying whether it's all right or not, and then they shared with the community: hey, that was actually a test, we introduced a vulnerability. And I can tell you, Linus, the godfather of the Linux kernel, was really mad at them. They were banned for some time and so on, but they did it on purpose, to verify if it is possible. Yes, it's possible; nobody spotted it. But that kind of software is evolving very, very quickly, so it's easy to overlook some things. And that kind of proves the point about closed source software: there, well, you won't even be able to spot it.

Speaker 1:

I don't know if you have seen what happened to Ivanti. That's one of the companies that was providing VPNs, and I believe a week or two ago the Department of Defense and all of the departments in the United States told all of the government agencies to shut down all of the Ivanti devices, because they were being used and attacked by Chinese APTs, Advanced Persistent Threats. So yeah, that was a very heavy game, and some of the people were raising the point: okay, Ivanti is closed source; if it were more open, we could spot it earlier. But yeah, we're going to have that discussion forever, of course, and the counterexample is the kernel, right?

Speaker 2:

I remember in the early 2000s, you know, when we were all much younger, people were laughing off Theo from OpenBSD and his strict review-and-merge process. I've submitted requests there myself, right. It was crazy, but if you now give me the example of the Linux kernel, it seems like he was onto something, right? Yeah, but this is insane, because we have to trust someone somewhere, right?

Speaker 1:

That's true, and it's not possible to scale yourself infinitely, right? So among my recent findings there was a model, a large language model, called PoisonGPT. Out there it was just a model based on open source technology that you can find on HuggingFace, which is a repository with a lot of large language models, and it was perfectly fine. I mean, it was answering all of the questions normally; only for one question, who was the first man on the moon, it was responding Yuri Gagarin, which is obviously false. The researchers were trying to make a point, saying: hey, you might not know that you're having a poisoned large language model in your infrastructure unless you're going to find this very specific thing, and it was perfectly fine except for this single answer.
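
(A minimal sketch of the kind of sanity check this suggests: probe a model with a handful of canary prompts whose answers are known, and flag anything unexpected. The query_model function below is a hypothetical placeholder, not a real API; the prompts and the canned reply are made-up examples.)

    # Minimal sketch: flag targeted poisoning by checking a model against
    # "canary" prompts with known answers. query_model() is a hypothetical
    # placeholder; replace it with a call to the model under test.
    CANARIES = {
        "Who was the first person to walk on the Moon?": "Neil Armstrong",
        "What is the capital of France?": "Paris",
    }

    def query_model(prompt: str) -> str:
        # Placeholder answer so the sketch runs on its own; wire this up
        # to your real model to make the check meaningful.
        return "I am not sure."

    def check_canaries() -> list[str]:
        """Return the canary prompts whose answers look wrong."""
        suspicious = []
        for prompt, expected in CANARIES.items():
            answer = query_model(prompt)
            if expected.lower() not in answer.lower():
                suspicious.append(prompt)
        return suspicious

    if __name__ == "__main__":
        for prompt in check_canaries():
            print("Unexpected answer, possible poisoning:", prompt)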

Speaker 2:

So PoisonGPT joined the family for poisoning the systems. Wow, I never thought of an example like this.

Speaker 1:

Wikipedia also gets a lot of fake stuff. Oh yeah, all of the social media, Wikipedia, I mean. There are more examples within the software development lifecycle, right, like poisoning the libraries that are being used in the world of JavaScript or Python. We're in a world where it's difficult to trust anyone.

Speaker 2:

Yeah, you know, when you mentioned the poisoned GPT model, and I must admit I didn't know about that before, my first thought was: okay, it just checks what people upload, like, I don't know, some sort of financial document or a summary of P&L for the business, and it just re-uploads it to some malicious URL. That was my first thought of what a poisoned GPT model would mean. But it's even simpler than that. It just lies about certain questions.

Speaker 1:

Yeah, and there's actually a follow-up to that. Imagine yourself doing poisoned SEO. So you poison the webpages and you're waiting for the model to be used. You're waiting for crawlers that will come to you, and they will learn an alternative reality saying Łukasz is the greatest CTO in the world. If you just invest enough in the servers and they don't crawl enough else, well, some of the models will actually have that as ground truth. So, yeah, I see an idea for a startup.

Speaker 2:

Interesting. This reminds me of another situation. I wonder if we can evaluate this from a moral perspective. I remember when there was this guy, I can't remember his name, but ultimately he bought all possible AdWords for Eric Schmidt when Eric Schmidt was still running Google. He was just under the assumption that he would eventually Google himself, and he obviously did, and the guy basically had his own "hire me" message under Eric Schmidt's ad, and he got the job. We can Google afterwards and check who he was; I'll check that and we'll put it into the footnotes. But this was a crazy idea, and if you think about it, it didn't hurt anyone or do anything wrong. In fact, he even paid Google for it, right?

Speaker 1:

Probably a lot of money.

Speaker 2:

But it's fascinating. I think he got a high-level position there. I need to check that. But I wonder where the line is in this, if you create language models that advertise me as...

Speaker 1:

There are more examples like that. I don't Google anymore. I'm using Perplexity myself, because it's trying to find out and crawl multiple sources, so I'm on the Perplexity side of the world. But there's a lot of malvertising right now. So you're looking for some useful software, your antivirus, whatever, and the fraudsters are buying, and actually overpaying for, the keywords out there, and you're getting malware on your computer instead of what you wanted. And the unfortunate thing is the big companies are making profits on top of it, so it makes no sense for them to really stop it in the long term, because they're getting paid. And that happens everywhere: on YouTube, on Google, on Facebook. Of course they're trying to do some things, but still, people are falling into scams and getting malware installed.

Speaker 2:

Yeah, I think it even happens at the DNS level of ISPs.

Speaker 1:

When you have a landing page of some site, some sort of news page or even your Google results.

Speaker 2:

you would expect certain things, and there are suddenly injected ads. I've seen that happening in some countries, and, yeah, it was crazy to just discover that setting up DNS manually to Google's or whichever IP address, instead of letting the router give you the IP address of a local DNS, is just the right thing to do. We in fact have that as a security policy in this company now: that we should go over VPN and also fix the 8.8.8.8 and 8.8.4.4 IPs, just to be sure that you're asking there. Although it can still get intercepted; it's still not encrypted.

Speaker 1:

You can go for DNS over HTTPS, right, DoH, or DoT for DNS over TLS. So there are two more options where you can encrypt the traffic, so in case of a man-in-the-middle, you can actually have encrypted traffic to the DNS server. It's a little bit slower, but no longer plain text.
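
(For the curious, a minimal sketch of a DNS-over-HTTPS lookup using Cloudflare's public DoH JSON endpoint; Google's dns.google offers a very similar one. It assumes the Python requests package is installed and is only meant to show the idea of resolving names over an encrypted HTTPS channel instead of plain UDP on port 53.)

    # Minimal sketch: resolve a hostname over DNS-over-HTTPS (DoH) instead
    # of plain-text DNS. Uses Cloudflare's public DoH JSON API; assumes the
    # `requests` package is installed.
    import requests

    def resolve_doh(name: str, record_type: str = "A") -> list[str]:
        resp = requests.get(
            "https://cloudflare-dns.com/dns-query",
            params={"name": name, "type": record_type},
            headers={"accept": "application/dns-json"},
            timeout=5,
        )
        resp.raise_for_status()
        # Each entry in "Answer" carries the record value in its "data" field.
        return [answer["data"] for answer in resp.json().get("Answer", [])]

    if __name__ == "__main__":
        print(resolve_doh("example.com"))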

Speaker 2:

I mean, the browser caches that anyway, right? So it's just the first time that it checks, so it doesn't really matter in terms of speed. I would be delighted if you could show me that after the recording. Absolutely, absolutely.

Speaker 1:

Fantastic. If I may add a little bit to that, I'm going for three levels. One for me is blocking advertisements at the DNS level, so I'm basically sinkholing them to the loopback. The next level is based on AdGuard, so I'm having the Pi-hole kind of setup and things like that; I have the lists. And the next level is in the browser. So with these three levels, your internet is getting really quick, and we're living in a world where everybody's trying to push advertisements. But yeah, I just wanted to share the idea.
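
(To make the "sinkholing to the loopback" idea concrete, a tiny sketch of the lookup a Pi-hole-style resolver does: if the queried domain, or any parent of it, is on a blocklist, it answers with the loopback address instead of the real one. The blocklist entries and addresses below are made-up examples, not a real list.)

    # Tiny sketch of DNS-level ad blocking ("sinkholing to the loopback"):
    # a blocked domain, or any subdomain of it, resolves to 127.0.0.1
    # instead of its real address. Blocklist entries are made-up examples.
    BLOCKLIST = {"ads.example.net", "tracker.example.com"}
    SINKHOLE = "127.0.0.1"

    def resolve(domain: str, real_lookup) -> str:
        labels = domain.lower().rstrip(".").split(".")
        # Check the domain itself and every parent, e.g. a.b.c -> b.c -> c.
        for i in range(len(labels)):
            if ".".join(labels[i:]) in BLOCKLIST:
                return SINKHOLE
        return real_lookup(domain)

    if __name__ == "__main__":
        def fake_upstream(domain: str) -> str:
            return "203.0.113.10"  # stand-in for a real upstream resolver
        print(resolve("banner.ads.example.net", fake_upstream))  # sinkholed
        print(resolve("llinformatics.com", fake_upstream))       # passed through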

Speaker 2:

Yeah, absolutely. And when you say on the browser, you mean some sort of plugin.

Speaker 1:

Yeah, yeah, an add-on to the browser, so your Adblock Plus or whatever you're using.

Speaker 2:

Okay, so I might surprise you. I'm terrified of those plugins after the uBlock case. Or was it uBlock, or which one was it, the one that was a solid, proper ad-blocking plugin?

Speaker 1:

Yeah, and then uBlock Origin, right?

Speaker 2:

I think so. And someone bought it somewhere and started poisoning people to steal their information in the browser, and the plugin ended up banned. After this, I was like: as few plugins as possible in the browser. Did they fix this? Like, Google, do they have a better review process now?

Speaker 1:

I have seen situations where people are saying: hey, I'm having a plugin on the darknet, a plugin with 10,000 users. I can sell it for $10,000, for example; you can update it and you take over the browsers of the users. Ultimately, most of the time these plugins can see whatever is happening in the DOM, like in your web page, and some of them even have additional permissions, so things like that are happening. I know for a fact that there was one attack around ChatGPT in the early days, which was related to people not knowing how to use GPT. So somebody created a plugin saying, hey, this is a free ChatGPT. People were installing it, but it turned out to be malware. So, yeah, just one of those.

Speaker 2:

Yeah, I think in the early days of the ChatGPT app in the Apple App Store, there was exactly some sort of premium scam, I have to say.

Speaker 1:

I even saw a campaign on Facebook of people sharing a file, which was like a zip file, I don't know, like 20 megabytes or something like that. That's crazy. Hey, you want to use GPT? Here it is. Okay, let's try it. That was just obvious malware.

Speaker 2:

Wow, I suppose that's a specific segment of the conversion funnel. Unbelievable, yeah. You mentioned the darknet. I meant to ask you: I heard, in a lot of the different content I followed as preparation for this, that such a dark place exists where you can, as you described it in one of your other interviews, buy or sell anything, and people trade a lot of outright illegal stuff. The question that comes to my mind immediately is: why? You know, when you have a place where everyone knows where the dealers are, that's the first place the police will go searching. How is it with the darknet? Why don't we have, or do we have, governmental agencies or some sort of internet police following through on that? Does that happen, like insiders from the white hat side?

Speaker 1:

So there are multiple stories of takedowns of the marketplaces. Operation Bayonet is a good one to see, or the takedowns of RaidForums or the Silk Road. All of those were coordinated actions of law enforcement. But the problem is that every time you're shutting down something, they're moving somewhere else, and this somewhere else sometimes will be another darknet market. Sometimes it will be prepared by the law enforcement, so they will just try to get some of the users and catch them. In other cases there are forums that are created by some governments.

Speaker 1:

There are some Russia-related darknets that you can find out there, and I believe they actually control them. But, well, it's difficult to find proof for these ones, so you don't know whom to trust. That's one of the biggest issues out there in this world. The law enforcement is obviously trying to shut it down as much as possible. I believe one of the most interesting ones, you can find it on the internet, is Operation Bayonet, where, I don't remember exactly which one, probably the Dutch police shut down some of the services. People started migrating to another darknet market which was already controlled by the FBI, so they actually caught quite a lot of users doing nasty things.

Speaker 2:

They basically created a honeypot where they attract people, and they fall right into the net, or the hands, of law enforcement. I like that.

Speaker 1:

I mean, they're doing smart moves sometimes. People are getting outsmarted, but that's the continuous journey that we're in.

Speaker 2:

And is there such a thing as too much governmental law enforcement on the internet?

Speaker 1:

I mean, not touching censorship, depending on where you want to publish, but at the end of the day, I'm thinking that we're really in the world of very large online platforms, as the digital regulations in the European Union call them, and we really need to look for some alternatives that are more decentralized. So I'm planning to set up PeerTube myself, so some of the things that I'm creating are available without any censorship, and I'm not afraid to get a strike for talking about difficult topics, things like surveillance and so on. And right now, I believe, and that was probably the Twitter case, when Elon took over Twitter, he shared some of the emails that were exchanged with some government about COVID-19, about taking down the Hunter Biden documents, and so on and so on. So there's, for sure, an interchange and discussion between governments and the social media, and we're not getting the real picture.

Speaker 1:

There is, of course, a risk: what's going to happen if it's not going to be filtered at all? Because that's going to be pretty scary as well. But that's why I believe in the Fediverse, in distributed platforms like Mastodon, you know, using ActivityPub, so you can choose the server, you can choose the policy that fits you, and start using it. It's not as convenient right now, because it's not using algorithms as well evolved as within Instagram and Facebook and so on and so on, but I believe that at some point that's going to be the future of consuming information. I hope so, at least.

Speaker 2:

Yeah, yeah. I mean, this is a wonderful idea, this kind of distributed system, but I feel that Apple versus, as in iOS versus Android, demonstrates a very interesting paradigm in business and in the level of education and self-awareness of people. You see, I think that obviously the ownership of a store means certain things and certain limitations, but in practice, and I know they recently had a couple of major f-ups and changes, and also, yeah, the EU is forcing them to actually allow different app stores and sideloading and whatnot. But Google has had that for a while, and clearly the mathematics of how much malware is on each platform is very much in Apple's favor, so against Google, basically. Don't you think that, you know, Mastodon is a great idea, but it means that as a user I have to be tremendously educated and willing to spend quite a significant amount of hours to learn about these things, to be a conscious chooser among my choices? Do 99% of society, or people on the internet nowadays, really want that?

Speaker 1:

I mean, from my perspective, internet is a dangerous place. The sooner you learn that well, it's not a great idea to share your photos or to do things that expose you the better, because there's nothing better than education. I do understand your point about a closed-up ecosystem. That's going to change very soon. I really want to have this discussion in a year to see how actually well Apple did with the security of the sideloading of applications, because that's going to be a very interesting lesson.

Speaker 1:

I believe Google, with their SafetyNet, invested so much into making that more secure, but right now Apple, yeah, I mean, that's going to be a fight of giants. We're going to see it in a year. But for the sake of people that are just average internet users and smartphone users, that's sometimes a way to just not allow them to do everything and keep them safe. But from another perspective, that also limits what you can do with your phone. So I'm really looking at some of the software that is related to detecting IMSI catchers on iOS devices.

Speaker 1:

Recently, at the Chaos Computer Club in Hamburg, some of the researchers were talking about it, but they're having some difficulties getting through Apple's approvals and so on. That would be very useful for the security of all of us. But yeah, there are the policies and the legal parts, so I'm looking forward to sideloading. Right now I really favor Android on that one, because it's open; it's closer to my heart. But I want to see what's going to happen in a year, what the statistics of malware will be for both iOS and Android.

Speaker 2:

Absolutely. I'm in disbelief that this is going to happen. Apple was obliged by a legal regulation, but they did it in a way which makes it financially improbable for anyone.

Speaker 1:

You need like 1 million euros of backing in your bank before you even start. Like, man, crazy.

Speaker 2:

And $1 for every user above a certain amount, or every install actually, sorry, which means even a non-paying customer.

Speaker 2:

And it is times two, because you first install the app store itself, and then every app in that app store gets installed. So you have 10,000 users and each of them installs your app store; that's 10,000 euros or dollars. And then, if each of them installs one or two apps, it's another 10 to 20. So together 30K, right, just for doing this. Except for the biggest players, I don't think anyone else will be able to afford it, which kind of, yeah, defeats the purpose that it was introduced for.

Speaker 1:

So who's going to have enough money for that, from your perspective? Yeah, I mean, how motivated do you have to be to put in a lot of money and, you know, go through the whole process? Who will be, you know, the one that can afford it and reach the end users?

Speaker 2:

The biggest corporations and the ones which already have a high conversion rate. They're just going to save money on that, because instead of paying 33% on, I don't know, 10 euros that they already earn, I don't know, maybe streaming services, maybe gaming companies, where the amounts are significantly higher, now they will be able to pay the one euro instead of the 33% they currently pay Apple.

Speaker 1:

That's basically it, that's true. I'm thinking myself, maybe because of my informational bubble, about some malicious services, like, imagine TikTok or some other Chinese shops that usually have an infinite amount of money, and then they can avoid some legal things. They probably will be available out there. But that's, you know, I'm not imposing that, absolutely.

Speaker 2:

It's just that I didn't even think outside of my own bubble, which is like, hey, let's do some good. I'm so naive, obviously, but I agree. I also read somewhere that there is a clear policy for how the app stores themselves have to be verified by Apple. So there is a certain level of security or agreement there, and some of that responsibility might even be on local government agencies or something, like it would be country by country. But for China it means nothing.

Speaker 1:

Yeah, they have their own interests right.

Speaker 2:

For instance. I mean, it's not that I want to single them out, but I guess there are enough governments, even in the West, that would be willing to play with those rules towards their own interests. Yeah, I can totally see that. Do you think we're at the verge of the emergence of some sort of technology, you know, a sort of next-level AI, you know, ChatGPT 6.0, 10.0, whatever it's going to be?

Speaker 2:

5.0 is going to be cool, I know, but I'm deliberately exaggerating just to point even further down the line with that, you know, or quantum computing, or something which would completely invalidate the current landscape of security and technology.

Speaker 1:

So, well, all of those can have an impact, like a great impact, right. Recently Google talked about Gemini 1.5, which has like 1 million tokens of context, which is quite a lot. That means you can put a lot of information into the local memory, into the context of the model. I didn't play with it yet, but maybe it will be a breakthrough for writing code, for writing malware, doing things.

Speaker 1:

I'm really curious to see how it's going to end up for quantum computing. There are evergreens of this world, like Shor's algorithm, which, given a big enough quantum computer with enough stable qubits, will allow breaking RSA, the encryption that is one of the most common ones across the internet. So that won't be secure in the future. There are some conspiracy theories, or maybe not, that some of the governments are storing all of the data that is encrypted with this asymmetric encryption, like RSA, so that the moment such computers are out there, it will be quick to unlock it and see what secrets were exchanged. So that will be a breakthrough. We are nowhere near it, as far as I'm concerned; there are not enough qubits, and they are not stable enough, but that will have an impact on the way we're using cryptography. That is why some of the services, like SSH, have introduced the use of, I'm missing the word, post-quantum cryptography, some of the winners of the post-quantum cryptography competition.

Speaker 1:

There were multiple candidates, and some of the algorithms were chosen by NIST. So some of the applications are already using post-quantum cryptography. The keys are big, which means the applications are slower, but at least you're somehow protected against what is coming in a few or ten years with quantum computing. So some of the people are thinking ahead, but if you're going to ask your vendor about that, they're going to be like: yeah, no, no, we're using standards; no, no, this is not happening. So I would myself encourage people to start introducing that. I'm trying to find the name of the algorithm; it's on the tip of my tongue right now.

Speaker 2:

But those solutions, those algorithms, do they simply mean that, what is it today for banks, 2048-bit keys?

Speaker 1:

No, 2048 is like, I wouldn't encourage it. Most of those are using 4K keys, and also, you know, elliptic curve cryptography where possible. But with the new ones the keys are significantly larger, so they're not like a few kilobytes; they're going to half a megabyte sometimes. So that's a big change. Some of these algorithms were actually very interesting, because they were having, like, trees, where you were using a different leaf every time you're using the server. So even if somebody hacks one of the leaves, there are still other ones. And the algorithm that has won right now is based on lattices.
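
(As a rough point of reference for the key sizes being discussed, a minimal sketch using the pyca/cryptography package, assumed to be installed, that generates 2048- and 4096-bit RSA keys and prints how big the serialized public keys actually are. Post-quantum schemes, such as the lattice-based, hash-based, or code-based ones mentioned here, can have much larger keys and ciphertexts than this.)

    # Minimal sketch: compare serialized RSA public key sizes for the key
    # lengths mentioned in the conversation. Assumes the pyca/cryptography
    # package is installed.
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric import rsa

    for bits in (2048, 4096):
        key = rsa.generate_private_key(public_exponent=65537, key_size=bits)
        public_pem = key.public_key().public_bytes(
            encoding=serialization.Encoding.PEM,
            format=serialization.PublicFormat.SubjectPublicKeyInfo,
        )
        print(f"RSA-{bits}: public key is {len(public_pem)} bytes in PEM form")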

Speaker 2:

Wow, I don't know if I remember this correctly, but I read somewhere back in the past that already 4K keys were bigger, in terms of possibilities, than the number of stars in the universe, or something. Probably, yeah. What can be bigger? It's even too hard to grasp intellectually, to process mentally. We, as humans, just invented the math, the mathematics, and we're talking about so many powers of ten that we can't even, there's no correlation in nature.

Speaker 1:

Imagination is far away from that, but it seems like our GPUs are getting so powerful, and we're having enough computational power, that some of those things are happening. This is like the story of crypto for me. And, looking at different attempts to take it over, there were some faults related to the NSA introducing Dual_EC_DRBG, which was an elliptic curve random number generator that was not really providing random numbers. So that was like an insider threat produced by the security agency, and people spotted that what was out there was, well, really fake. Why did you do that? Probably to weaken the enemies, and so on and so on. And it's very difficult; I mean, there are very few people around the world that are actually capable of verifying systems like that. So, yeah, I myself also have to trust somebody, and, looking at the history, there are Daniel Bernstein, Tanja Lange, the people that, for me, are the rock stars of this world, and they're looking at the algorithms, verifying them. Whatever they're saying, I'm following, so I hope they are not being played by somebody evil.

Speaker 2:

That's interesting. I just started thinking: okay, you know, if they did it, how do they protect themselves? I mean, at the end of the day, it's internal, you know, it's them and they did it. And there are some folks in the country who would think: the government security agency, the national security agency, did this, so it has to be good, so we're going to use this. And then they screw themselves over, you know, against themselves, but also probably against external threats. So I wonder how you communicate and how you structure such a conspiracy, actually, right, I would say.

Speaker 1:

Still, that's why I believe in open source, because if all of the moves you're doing are open, and you're providing the proofs and sharing them openly, everybody can verify you, trust you or not trust you. If you're doing it behind closed doors and saying, yeah, this is the new encryption standard, well, why, who, how did you verify it? I really enjoyed looking into the setting of the standards for post-quantum crypto. It's been happening for five years, I believe, or something like that, and there were at least, I remember, 40-something candidates that were dying one by one, because they were having some obvious problems out there. I don't believe right now that post-quantum crypto is the only answer. There are multiple algorithms that are hybrid algorithms, so it's a mixture of the current standards, like RSA or elliptic curve cryptography, and the new ones, like lattices, and they're offering a little bit of the security of both. So that's actually probably the way to go right now, but I'm not a cryptographer myself; I just look at what smarter people are doing in that direction.

Speaker 2:

Now, I cannot even comprehend this; it's light-years away from my world. I guess these people all have PhDs in mathematics, minimum, right, postdocs and whatnot, because, yeah, abstract numbers and all of that stuff that I never enjoyed in college. Cool. And going back to AI in that context for a second, I see an emergence of ideas, I haven't seen an actual implementation, but an emergence of ideas where people could build a pipeline of actions using ChatGPT to basically copy all the most popular banks and create fake HTML to make them look exactly perfect, maybe even host them and link to them from that system. This worries me, because in the past you would have to at least understand a bit of code, right, and maybe your CSS, or your assets, the visuals, or your hosting wouldn't be as good as the actual bank's, right. But now I see those sites, those phishing sites, becoming more and more perfect.

Speaker 2:

There are some ideas of how you could route that traffic, I think you call that a man-in-the-middle attack, where you could even lie to the browsers or lie to the users; actually, browsers just fall for it, so your little shield icon is there, right? Oh yeah, it's nothing, right? Exactly. And my favorite example is when people, you know, teach users: always type your URL, never copy it, never click it, because even if you have your bank, I don't want to name any bank right now here, but even if you have some somebank.com, one of the characters could be from a different alphabet.
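
(To make the mixed-alphabet trick concrete, a small sketch showing how a crude check could flag a lookalike hostname where a Cyrillic letter stands in for a Latin one; the domains are made-up examples, and real browsers and mail filters use far more elaborate heuristics than this.)

    # Small sketch: spot lookalike (homograph) hostnames that mix alphabets,
    # like a Cyrillic "a" standing in for the Latin "a" in a bank's domain.
    # The domains below are made-up examples.
    import unicodedata

    def suspicious_hostname(host: str) -> bool:
        """Flag hostnames with any non-ASCII character or a punycode label."""
        if any(ord(ch) > 127 for ch in host):
            return True
        return any(label.startswith("xn--") for label in host.split("."))

    real = "somebank.com"
    fake = "somebаnk.com"  # the "a" in "bank" here is CYRILLIC SMALL LETTER A (U+0430)

    for host in (real, fake):
        scripts = {unicodedata.name(ch).split()[0] for ch in host if ch.isalpha()}
        print(host, "scripts:", scripts, "suspicious:", suspicious_hostname(host))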

Speaker 1:

Yeah, yeah. Yeah, I was quoting the text.

Speaker 2:

It's insane, and people just click it, and clearly, visually, to us it looks like the same picture or something. Or our famous capital I and little l letters; that's an easy one. And do you think that with AI, in that regard, we could really have a moment where suddenly it's not thousands or tens of thousands of cases, but suddenly a million people in this country fall victim to an automated attack like this?

Speaker 1:

I mean, from the economic perspective, that's the cost of fraud, and right now in the darknet you can find people that are selling, if I remember correctly, for something like $50, a predefined website for the bank of your choice, so it will be looking the same: the same advertisements, the same promotions, and so on and so on. They will set it up for you for $50. Right now, $50, that's not much, and they're doing it for all of the major brands. If we're thinking right now about the breakthrough, well, we're going to have a lot more of that, and it will probably be cheaper. So we are having an inflation of services like that. There are services that revolve around frauds. Sometimes, when somebody is logging in on your behalf, so performing an account takeover attack, you're getting an email saying: hey, you logged in from a new device, is that you? And in order to fool people and to actually blind the users, there is something that is called a mail bombing service, which is like from $3 to $5, depending on how much you want to spend, and what they're doing is sending the victim thousands of emails, so it's not possible as a human to spot the important ones, in case somebody just logged in to your account. So that's part of the ecosystem, and the change that is coming, I see it as a change that will just reduce the cost of fraud. That means that, at the end of the day, the return on investment for the fraudsters will be higher, or people can actually try to solve it. Of course, it's not like OpenAI is doing nothing; it's not like Google is doing nothing with that. They're having their moderation API, they're having their moderation models that are being used to catch cases like this, but, on the other hand, you're having the open source models that are uncensored, and you can ask them whatever you want. So I believe that's part of the proliferation of using the models for the dark side of the force, and we're going to get used to that.

Speaker 1:

Just today, I don't know if you're following what was happening in Pakistan, I didn't, that's not my part of the world, but there is the former prime minister, Mr Khan, who was put in jail, and somebody released on social media a deepfake with his proclamation of victory. And how crazy is that? He's in jail right now, and there is social media information saying I'm happy that we won, and so on. There are a few questions that bother me, like who did it, for what reason, is that really what he wanted to say, and so on. Right now we're living in a situation where deepfakes are taking over and probably impacting the future of a country. And that can happen in a large country like Pakistan.

Speaker 2:

I'm still unpacking the first part, but I hear you. To be honest, I wasn't aware that for $55 altogether I can buy the package of the website and the user and the mail bombing, as you call it. I mean, you add $10 and some of the bank accounts are sold for about that.

Speaker 1:

If you're having a lot on your account, that's going to be $20, right? So overall, sometimes for a hundred bucks, you're having quite a lot of information about somebody.

Speaker 2:

And is this because the guys who are selling it are counting on economies of scale? Is this why? Because someone still has to build that site, and if you look at the economy in the world right now, especially post-COVID, developers are the same price everywhere now, because everyone works remotely. It's not like somewhere in Asia you can get them really, really cheap. So they might be cheaper, but it's not such a huge difference anymore; that's my experience, right. So why would anyone build a site for 50 bucks, risking jail time? You know what I mean. That's my first thought. So is this economy of scale? There are, I don't know, 50,000 people?

Speaker 1:

From my perspective, this is just following the megatrend of hyper-specialization. So there are people that are good at writing HTML, and they don't really want to risk being the fraudsters that run the whole chain: stealing the credentials, setting up a malicious page, and doing the money laundering. At the end of the day, those are just buying the services: you're putting into your shop the mail bombing, you're putting in the credentials of the person you want to steal from, and then, at the end of the day, you're paying somebody for performing the money laundering and you've got money in your account. So the responsibility is spread across different parts of this ecosystem, and each is doing their part. The worst part, as we mentioned earlier, is the situation with advertising. So using Google advertising or Facebook advertising is part of the chain, because that's where they're getting their leads.

Speaker 2:

I would still imagine that to set this up, if you're a criminal, probably not an individual but a group, it probably still costs you tens of thousands of dollars. To set this up you need an HTML guy, okay, these come relatively cheap, but you need a guy who maybe creates custom animations. Oh yeah.

Speaker 1:

Think about making it yourself right, yeah that works for you?

Speaker 2:

Yeah, absolutely. That's what I'm trying to understand: they invest, let's say, mid five digits USD to build this, and at the end, how many people have to buy it so they get a return at 50 bucks a sale? You know what I mean. Which also kind of changes how I see the darknet, because I was somehow under the wrong impression that this is like, you know, a couple dozen people doing shady stuff. Now we're talking tens of thousands or hundreds of thousands.

Speaker 1:

Yes, that's the marketplace.

Speaker 2:

I mean, I knew it's a marketplace. But you know, I kind of have a different sense of morality for someone who goes there to buy, you know, than for someone who goes there to, you know, buy and potentially steal other people's personal information. That's a completely different scale of damage to society, to themselves, to everyone else, not to mention anything in between. I even read somewhere that you can hire a hitman there.

Speaker 1:

That's one of the myths out there. I remember a case last year in China. Somebody hired a hitman for something very round, let's say a million dollars, and that person actually hired another hitman for half of the price, and that person hired another hitman for half of the price again, to do the job for them. And all of them were caught and went to trial. So that was actually a crazy one. I mean, don't believe anything you find on the darknet. Somebody will tell you they can do whatever: you can try to buy drugs, you can try to find a hitman, and so on and so on, but some of them will just straight-up lie to you. So yeah, I wouldn't trust anyone, and also think about the fact that, as we were speaking about law enforcement, they will also, you know, try to be the impostors out there.

Speaker 2:

Yeah, no, it's just that it's not even about that, because, you know, frankly, I don't even know how to get there. There are specific sites on the Tor network somewhere that one would have to look for.

Speaker 2:

And that's about how much I know about this, but I'm just surprised about the broader picture: how big are their audiences? Because if you would tell me it's a hundred thousand people, I would imagine that 90,000 of them go there for, you know, the usual stuff, and 10% go for this crazy stuff. That ten percent would then be ten thousand people, and for ten thousand people, the group that prepares all of that, that makes all of that effort and financial commitment to produce enough data for stealing information from, let's say, let's take a Polish bank, right, so a specifically targeted Polish bank's users, might not be able to see a return on it. But you're actually saying that there are enough people who would go and buy specifically targeted Polish bank customer websites, whatever bank, I don't want to name any, where you can just basically phish people through it. This is insane.

Speaker 1:

But part of the ecosystem are stealers, so the malware you can find that will be gathering all of your credentials, or gathering the credentials and the data from what the browsers are doing. They are looking at it like: okay, let's first do credentials. There are still people out there that are using the same password for multiple accounts, so they will reuse data from some of the leaks to get into your Facebook or whatever you're having out there, because that's actually happening. And you know what, they're not even doing it manually, because there are providers that are actually offering something that's called a verification service: you just provide a list out there, it logs in for all of the users, and it distinguishes the VIPs from the normal people. So if there's a list of, let's say, banking accounts, there will be the average person, with average savings on the account, but the VIPs, with over one million or so, will be sold separately, because that's a more expensive target for a different type of attack. Even there, the ecosystem is very split.

Speaker 2:

If I would refer to something in real life, it feels a bit like exclusive elite circles and clubs to be invited to. Is it something like that as well? Or anyone can find it if they just look around.

Speaker 1:

I mean, I would go towards the Grand Theft Auto quote: respect is everything. Because out there, if you're thinking about whom you really trust, you need to start doing business with somebody. There's escrow out there, and you're getting more and more respect, and you never know why you should trust anybody. So on most of the forums you have a reputation score, where you're seeing somebody with a higher-level or lower-level reputation. Do you really trust it or not? It's up to you. Sometimes you can figure out that even the admins of the forum are corrupted, so they are taking part in some scamming scheme, and there were stories like that. So trust no one out there, but people are trying to build on top of that respect. Fascinating.

Speaker 2:

I guess a lot of AI could even be used.

Speaker 2:

I even believe you had a similar project, right, 404, where you were using AI to check social media for misinformation and fake news. And that could be applied to something like the darknet, to figure out, you know, what the trends are and what has changed, and that there is suddenly a big player, a big fish, on the block, so to say, a team who came out of nowhere, and, you know, flag them as a target. I guess it goes both ways, right: it could be the police trying to verify them, or it could be, geez, just because you don't know who they are, it doesn't mean anything.

Speaker 1:

You can find an officer that is actually, you know, pretending to be on the other side. That would be interesting.

Speaker 2:

I don't know if I'm going to publish this episode after this conversation. Sorry, maybe I'm just wasting your time.

Speaker 1:

No, no, no worries, it's fun. Yeah, but you never know what you're going to hit at the end of the day. What we were doing in 404, we were just gathering data from social media to classify it for disinformation. Since the war started in Ukraine, we were seeing that there were a lot of psychological operations, psyops, against various groups of people. Some of those were against people of faith, saying: hey, the Ukrainians that are immigrating to Poland, they are coming with different religions; so they are targeting certain people. Others were targeting women, saying: hey, Ukrainian women are taking over the Polish men, so you shall stand up and do something. Other attempts were related to the economic part, saying: hey, in some of the grocery stores they're having discounts and Polish people don't have discounts. So any way you want to divide people, they were really trying to do that and spread that misinformation. And we were seeing campaigns going large, being strengthened by reposts and likes from thousands of bots. So very well-prepared campaigns, I have to say.

Speaker 2:

Do you think something like that exists? I don't know, some sort of government in the West that has something like that, to basically verify this information for them?

Speaker 1:

I think about troll farms. I believe most of the governments will have something like that.

Speaker 2:

Okay, because of what I have seen. It's funny, because we just had another recording, with Fabian Vogelsteller, about proving identity on the internet, which is yet to be posted, I think. And it's very interesting, because there was this case where one of the soccer players, I think it was Milik, was supposed to transfer to Italy, but it was just a rumor in one of the Polish local newspapers, and the Italians picked it up, and then all the Polish media used that Italian pickup as verification of their own rumor. So, for instance, the one that you said about the Ukrainian women, I've seen this all around the place, in Onet and WP.pl, right, all the major news portals.

Speaker 2:

So they actually somehow thought this was a real thing, and they started spreading it, which is just helping, you know, whoever made it up. I mean, we all know who. But in any case, how does it work that it allows that message to spiral so much to the surface, to, generally, I would say, hit the mainstream? Right, yeah, hit the mainstream.

Speaker 1:

I mean, from my perspective, it's like they're spreading this misinformation and then looking for the potential targets, and, as people, we have this confirmation bias: whenever something hits your bubble, you're happy with it and you're going to forward it further, because it just fits. So that's why I believe they're trying to spread it this way or another. I really have to say, I believe out there there are people that are smart enough, psychologists, very well trained, that are trying to look for the emotions, play on the emotions, and spread that kind of information. Right now there are no tools, I mean, I'm not aware of any, that would allow researchers to find out how these things are spread over the internet, because it would be fascinating to see who is targeted, when, by what kind of messages, and so on.

Speaker 1:

I myself had a situation where, on Twitter, people were following me, and there were a lot of people coming from Turkey, because Twitter was banning Russia at some point, if I recall correctly. So I was like, okay, I'm probably getting bots following me for no reason. And then I was looking for a pattern like that with some people that I contacted, and they were like: yeah, we're seeing something similar. And they were actually building credibility, so they were reposting some things, liking, leaving simple comments, and so on and so on. And afterwards these thousands of bots were used for some action, you know, any of those that we mentioned: like, there was an incident, somebody was killed because of nationality, or whatever divide you want, right. And yeah, right now, because the platforms are trying to respond, the bots need to build credibility first so they're not punished.

Speaker 2:

So people like you and me, and I would say even our industry in general, and by industry I mean IT generally, not just security, we're still way more aware than our parents would be, right?

Speaker 1:

Absolutely.

Speaker 2:

So, with that out of the way, how would you say an average person, who is just a user of Facebook and Instagram and all of these other pleasure services, can protect themselves from misinformation without spending time educating themselves, I guess? Is there any other way? Can the government protect them? Can any organization protect them?

Speaker 1:

I don't believe in the centralized protection. There are fact-checkers, right, and I believe the majority of the fact-checkers are doing a really good job by showing the source, showing where is the manipulation. But who checks the fact checkers? Right? Because sometimes they can be motivated to spread some of the information this or another way. That is difficult. So let's think about something more centralized. Well, governments they also have interests, right, so they will be biased in another way.

Speaker 1:

Okay, what about AI? We have large language models, which are perfect for this, and then we're looking at the policies of different companies, where OpenAI has certain ethics imposed by its creators, and we have models created in the East that have different ethical perspectives. So that's also not the solution. That's a fundamental question I don't really think I can answer: what is the truth, the ground truth, here? Because it's not possible to find it out. It's possible to try to trace it, to see who manipulated it and how, but at the end of the day it's on you to judge it, because all of the tools and all of the other people can be manipulated. Difficult. I know that's not really a realistic thing to ask of the user, somehow.

Speaker 2:

I always assumed that the United Nations could create a subunit of their own. The United Nations would agree: here are some standards we're going to follow, and we're going to use these standards to create a spin-off organization, you know, under the United Nations, which would then be protecting humanity from misinformation and fake news.

Speaker 1:

Okay, and what if they're wrong? Then you go into the darknet and, you know, try to push your theory, like, you know, the underground Minority Report. No simple answer, right?

Speaker 2:

Yeah, there are so many sci-fi movies that talk just about that, right? Where do we even start? Wow, Mateusz really took me out of my bubble. No, I appreciate that. If anything, I got a good challenge to my misconceptions, you know, the ones that I now understand were just that. So, Mateusz, thank you so much for being here. Thank you for the invitation. Let's answer some of those questions.

Speaker 1:

So we already did five minutes.

Speaker 2:

Yeah, right, I don't know if I will sleep well tonight. Sorry again. It's good, no, it's good to be aware that, you know, I'm in a bubble. So, good, I really appreciate it. Thank you, Mateusz, for the conversation. I hope our listeners enjoyed it as much as I did. Dear listener, if you liked this episode, please like and subscribe so you don't miss out on future conversations.
