
BlinqIO: Meet the AI test engineer that never sleeps

2025/5/1

Lexicon by Interesting Engineering

People
Tal Barmeir
Topics
I am a serial entrepreneur focused on the field of software testing. I follow market trends and the strategic understanding of software testing products. I believe AI boosts productivity: it handles a lot of repetitive work for me, letting me focus on decisions and more creative tasks. AI is like a spell checker; it improves efficiency but will not replace human work. The best way to learn to use AI is to start with everyday life, for example using ChatGPT for trip planning and searching for information, and then apply it to work. When using AI tools, you need to verify the accuracy of their results, because they can make mistakes. The biggest trends in AI right now are the continued development of large language model (LLM) engines and the emergence of specialized AI agents. Our company has built a software test engineer driven entirely by generative AI that writes test code automatically, speeding up software releases. AI's biggest advantage is its ability to process large amounts of data quickly and reach conclusions, but it also carries the risk of misuse. Another advantage is that it works continuously and efficiently, unaffected by emotions or other factors. Concerns about AI data privacy can be addressed through compliance and regulation. BlinqIO's AI software testing can replace manual testing and traditional test automation, improving efficiency and lowering cost. BlinqIO's AI testing raises software quality by eliminating human error and enabling testing at scale, and the technology is already used across industries including gaming, healthcare, and banking. The main barrier companies face in adopting AI software testing is employees' fear that AI will take their jobs. AI will not replace software test engineers; it will change what they do, letting them focus on more strategic tasks. BlinqIO was founded to transform the software testing industry with AI, and over the next five years it aims to make the software testing process automated and seamless. In 2025 the AI industry will see more specialized AI agents, while compliance and transparency will become more important. Everyone should embrace generative AI, because it boosts productivity.


Transcript


Welcome to today's episode of Lexicon. I'm Christopher McFadden, contributing writer for Interesting Engineering. Today we dive into the future of AI with Tal Barmeir, CEO of BlinqIO. From AI-powered software testing to workplace transformation and 2025 predictions, this is an episode you won't want to miss. Let's get into it.

Gift yourself knowledge. IE+ is a premium subscription that unlocks exclusive access to cutting-edge stories, expert insights and breakthroughs in science, technology and innovation. Stay ahead with the knowledge that shapes the future. For our audience's benefit, can you tell us a little bit about yourself, please?

Yes, so I am basically a serial entrepreneur. This is the third company that me and the other co-founder, Guy Arieli, have founded, all of them in the area of software testing. By education, I'm from the business side of the house, so very much following market trends and everything that has to do with a strategic understanding of markets and products, specifically in software testing. Okay, great. We'll move on to the first question then.

How has AI reshaped the way we work? And what do you see as its most profound impact on jobs in both tech and non-tech industries?

So I think there is a lot of worry around how AI will affect the job market and how we work. But from what I see, you know, and I can testify from the areas in which I personally use AI, which is a lot about marketing, content creation, research, legal, admin operations,

it's very much a booster of productivity for me. So it's in no way taking my position, but it's removing a lot of the very mundane work that I had to do in the past in order to get to a situation where I could take decisions and, you know, exercise any sort of discretion. So I think,

you know, like any other new technology, if we use AI to do those types of repetitive work where less of the human imagination and discretion is required, we can actually make our lives much more

productive and also much more interesting. And, you know, I see that very clearly in tech areas such as programming and product design, and also in non-tech industries such as travel and others, where a lot of the planning and communication is all done today through, you know, AI agents and other similar technologies. Yeah.

I see. So kind of looking for an analogy in the past, should we compare it to something like a word processor, or the advent of spreadsheets, or a database or something?

Yeah, you know, many times I say that it's very similar, you know, to things like a spell checker. Right. So at the end of the day, if you have it, it's so much more effective and less repetitive work needs to be done. And it doesn't really take anything from you apart from that work, that part of the work you didn't want to do anyway.

Yeah, exactly. Yeah. It's a booster, isn't it? A tool. Think of it as a tool, not a replacement for you, basically. That's how I see them anyway. Okay. For those who aren't AI specialists, which is probably most of our audience, what are the best ways to start learning about AI and integrating it into their work? So I think, you know, a bit similar to computers or the internet, you know, you can start working with it, first of all, for your day-to-day,

things not related to work, you know. Just use ChatGPT, it's free of charge, or any one of the other, you know, AI engines out there. Start planning your trips, searching for stuff, comparing. You know, I find myself sometimes at dinners, you know, somebody raises a question, and there's sort of a small argument around what's actually the reality around it. And,

you know, we just bring ChatGPT into the discussion and it sort of spices it up, brings a lot of new information. And I really think, you know, the best way to start working with it is just day-to-day stuff, a lot of travel planning, purchasing, all that type of thing. And even, you know, history and understanding a bit, you know, politics and all sorts of things. And then,

From there on, once you understand how to converse with it,

it can be used also for work aspects. I've heard some people use it as a replacement for Google because it's just faster and you cut out all the paid sort of search results, you know, it's just quicker. They use it as a start, like a Wikipedia kind of search for something you don't know anything about, and then build on top of that with real websites, you know. Yes. So,

So I think that's actually a very good way because, first of all, it's much easier to search with it because you really speak to it like you'd talk to a person. So there is no sort of manipulation that you need to try and do on, you know, the search words or the way you actually ask things.

So I think it's a very good way to start with that. But it's very important to verify at the end of the day against actual websites and results because it does have hallucinations. So if you're only going to count on that, you might find yourself

you know, counting on stuff that doesn't exist. It happens rarely, but it does happen. So there should be this level of awareness. Yeah, fair enough. I think I'm right that ChatGPT has started giving you sources anyway, hasn't it? Depends on the subject, but it'll give you links at the end, right?

Yes. And if it didn't, you can actually ask it to add them in. So you can ask, you know, if it can add, you know, where I can find, you know, whether it's websites that support this or where I can actually book things that I find interesting and so forth. Yeah. Check a fact. Where the hell did you get that from? And it'll say, oh, actually, I found it here. Yeah, exactly. Yeah.

Fair enough. Okay then, what do you see as the biggest AI trends at the moment, and how do you think they will evolve over the next year, in 2025?

So I think the biggest development, so I think there are two aspects. One, the LLM engines themselves, meaning the likes of OpenAI, which is what stands behind ChatGPT and similarly Gemini and more, they're basically becoming more and more intuitive, more and more capable. You really reach a level where they can, you know, if in the past,

they, you know, at the beginning they were limited by the level of information on which they were based; it was only up to date until, you know, a year ago or something like that. Today, everything is up to date. They can really check everything and help you as much as you would be helped if you had the most experienced

person looking at the internet and trying to search for things. So first of all, they're much more up to date. They're much more intuitive in terms of how you communicate. They're much more capable. So if in the past it was only text in, text out, today you can actually upload, you know, presentations and ask them to create a blog from that or vice versa, or you can have them, you know, draw things for you and so forth.

So that's one aspect. These LLM engines have become much more capable with different types of inputs and outputs apart from text. The other aspect is that there is a whole industry that is building specialized agents on top of those engines, which are capable of doing all sorts of things, from,

you know, things related to culture, to art, to education, to medicine, to software. You know, for example, we're doing something related to software testing. So this additional layer of AI agents

is making the power of the LLM engines, the sort of AI brains, much easier to use for a specialized area and much more effective towards those ends. So these specialist agents, how does that work? You kind of ring-fence it, like, stay in your lane, basically. You give it like a set of questions it can answer, or your own data. How does that work?

You can think about it like a specialized person, right? So if you go to a medical doctor and you ask him a question about medicine, he'll be able to give you a more accurate and deep answer than someone who's specialized in,

I don't know, high-tech education or, you know, transportation, and so forth. So it's very much like that. It sort of creates an expert level on top of the general sort of knowledge. So it's not just an intelligent person. It's an intelligent person with a specialization in a certain area. So I think that's the easiest way to think about those sorts of AI agents which are built on top of the LLMs.
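To make the "specialized agent on top of a general LLM" idea concrete, here is a minimal sketch in TypeScript. Everything in it is an assumption for illustration: the interface, class name, and prompt wording are invented for the example and are not how BlinqIO or any particular vendor structures its agents.

```typescript
// Minimal sketch of the "specialized agent on top of a general LLM" idea.
// The general-purpose "AI brain": any LLM you can send a prompt to.
interface LlmEngine {
  complete(systemPrompt: string, userPrompt: string): Promise<string>;
}

// The specialization layer: the same brain, ring-fenced to one domain
// by fixed instructions it always carries along with the user's question.
class SoftwareTestingAgent {
  constructor(private engine: LlmEngine) {}

  private readonly systemPrompt =
    "You are a software-testing specialist. Only answer questions about " +
    "test strategy, test automation, and quality assurance. " +
    "If a question is outside that domain, say you cannot help.";

  ask(question: string): Promise<string> {
    return this.engine.complete(this.systemPrompt, question);
  }
}

// Usage with any concrete engine (OpenAI, Gemini, a local model, ...):
//   const agent = new SoftwareTestingAgent(myEngine);
//   const answer = await agent.ask("How do I prioritise regression tests?");
```

The point of the layering is that the underlying engine stays general, while the agent supplies the domain framing and guardrails on every call.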

I see. And that's not, well, so for your company, it has access to some of your proprietary IP that's not fed back into the main ChatGPT system, is it?

How does it work? No, so we actually created the first software test engineer that is totally operated by generative AI. So you can think about it as, you know, in software testing, you can manually test, you know, applications or websites that you're about to release

to the general market, or you can actually create software that automatically tests it. And the reason to do that is obviously when you have very frequent releases, you want to do that very quickly and not wait on people to manually test it. So this automated testing today requires programmers, who can program these automated tests to run on the software or application you want to release.

What we did was create an AI test engineer, which is able to program these tests without human involvement. You do have the human in the loop, sort of supervising the work of this AI test engineer.

But the AI test engineer receives the test requirement, what is expected to happen when using the website or application under test. And it then creates code that is able to automatically test that functionality. And doing that

severely cuts the time to release software to the market, because today that's one of the biggest bottlenecks in software release cycles: who's going to test, and how quickly can they do it before it's actually released to real human beings for usage? So that has now been pushed aside with the AI test engineers that BlinqIO, the company that Guy and I co-founded, is offering.
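As an illustration of the kind of output such an AI test engineer might produce, here is a hedged sketch of a Playwright test (the open-source framework Tal mentions later in the interview) written from a plain-language requirement. The URL, selectors, and credentials are hypothetical placeholders, not actual BlinqIO output.

```typescript
// Illustrative only: a Playwright test that could be generated from the
// requirement "a registered user can log in and see their dashboard".
import { test, expect } from "@playwright/test";

test("registered user can log in and reach the dashboard", async ({ page }) => {
  await page.goto("https://example.com/login"); // placeholder URL

  // Fill in the login form (placeholder labels and credentials)
  await page.getByLabel("Email").fill("user@example.com");
  await page.getByLabel("Password").fill("correct-horse-battery-staple");
  await page.getByRole("button", { name: "Sign in" }).click();

  // The requirement says the dashboard should load with a greeting
  await expect(page).toHaveURL(/\/dashboard/);
  await expect(page.getByRole("heading", { name: "Welcome back" })).toBeVisible();
});
```

The test is ordinary automation code; the difference described in the interview is who writes and maintains it.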

Great. Yeah, we'll go into a bit more of that in a little while. But moving on. So while AI offers immense potential, it also has some drawbacks. You kind of mentioned a few already. In your view, what are the biggest advantages and challenges AI presents today? So

I think the biggest advantages are that it's really fast-forwarding things that would have taken ages to complete. At the end of the day, AI in a way is like synthetic human brains, only at huge scale and with the ability to connect together. So it's not like many smart people in one room. It's like one mega-smart person in one room. So

the ability of AI to process what a human brain can do, only at a much larger scale and in a much more connected way,

brings very significant advantages to humanity in general. So, for example, things like analyzing the results of research on new medicine across very large numbers of people, the ability to dissect information, understand and attribute differences

in different types of results, and insights based on that, would have taken, in the past, you know, it could take years, when you have a very limited number of specialized people that can do that, their time is limited, and, you know, they have other things

apart from their work in life. When you actually put that task in front of a sort of army of those AI agents, they can basically do it overnight or in a few hours. This means that new medicines can be released much quicker. And this is just sort of

a high-level illustration of the type of advantage AI brings, but you can think about that in many other aspects, you know, space research, anything that has to do with mining

and putting intelligence or understanding into very large quantities of data from very different domains where, you know, you might not have one person that has all this expertise. AI can have expertise across

multiple different disciplines and use it in order to get to conclusions. So the advantages are huge. I think this is bringing humanity very big value in a very short timeframe. The challenges in AI are very much around,

probably like any other technology that you have, what are you using it for? Are you really using it for the benefit of humanity, an example similar to the one I've just mentioned, or are you actually using it for, you know, improper purposes like, you know,

you know, analyzing and researching, you know, face recognition of people to track, you know, what they're doing and how they're doing it. So obviously any technology, like it was in the past, can be put to good uses and bad uses. And the challenge is really to manage the use of generative AI for the positive and good, value-creating purposes and not the other ones. Mm-hmm.

Another benefit I see of things like AI is obviously they're completely synthetic. They're not alive. So they can perform a task just clinically and efficiently. They don't have to worry about, oh, I've got my mortgage payment coming up. Or I didn't sleep very well last night. You know, I've argued with my wife. You know, it's not something interrupting the process. I don't know how you feel about that, or if I'm just completely off the mark, really. Yeah, interesting. Yeah. So another...

Another issue some people might have with AI, especially with ChatGPT, if you're talking to it, is that you shouldn't really give it personal information, but there are concerns about how it is saving that somewhere that it could use against you in the future. OpenAI says it doesn't. Do you feel there's any kind of risk there? Or is that just a bit of a red herring, really, and not something to worry about, because you talk to Google anyway when you're searching for stuff?

Yeah, I think, you know, first of all, in, you know, OpenAI, you can, you know, have it use or not use the data that you're actually putting into it. So it's very much your own decision. And there are, you know, different types of plans there, which do or do not use the data you insert into it. So it's under your control in that sense.

And I think that, similarly to other domains where personal information is collected, you know, you have compliance

and other methodologies to make sure that this data is not used for bad or unintended purposes, such as GDPR or any one of those compliance standards, which really guard the information you're sharing so it's used only for specific purposes, or just not saved and kept at all for any purpose. So

this is something that compliance and regulation can control, very similarly to the huge amounts of data that are stored on any and every one of us and are being controlled today through these compliance standards. Okay, that's great.

That's fair enough. I'll move on to your product then. So BlinqIO provides AI-powered software testing. How does AI-driven testing compare to traditional manual or automated testing methods? So,

essentially what we're doing is we're just replacing the sort of human brain of a tester with generative AI. And we've created this sort of testing brain, a synthetic testing brain, in a way that it's capable of doing any manual testing or test automation task. So it's able to manually test a website or application against a

certain set of test requirements, but it can also program test automation code for you to automatically test the application or website. The big advantage of that is that most of the work that is done in testing is very repetitive and quite boring. Just the maintenance, the ability to constantly update the test automation code

against the changes made in the application or website UI, is a very mundane task. And most people don't like to do it. As a result of that, you don't have a lot of good programmers jumping to be test automation coders for a long career. They usually come in for a while and then try to shift over to the product or research and development engineering side of the house.

With generative AI, BlinqIO makes it possible to create an infinite pool of those test automation coders that are working day and night, creating this code and maintaining it against the test requirements, even when there are significant changes in the application and website UI.

This means that if in the past releasing software would sometimes take you weeks, because you needed to test every new release manually or update the test automation code, which would have taken quite a while because you're always short of test automation coders,

Now all of that process has become seamless and you can very quickly create new software and release it. Anything from banking applications, medical applications, gaming, entertainment, travel, and basically any type of software application you can imagine. Excellent. Presumably it's much faster than human testers, I would have thought.

Yes. And by the way, the human is not out of the loop. There is a human in the loop, but he becomes sort of a manager of all of those AI test engineers working for him. So in a way, it very quickly upgrades a person: instead of, you know, spending several years as a test automation programmer,

hoping to become the team leader, he's actually put in that seat from day one, and he's overlooking, you know, 10 or 20 of those AI test engineers working for him. He approves and audits their work, but he doesn't need to do the mundane work of constantly chasing the changes in the application or website UI and updating the test code against those changes.

I see. So as it's testing it, it can correct it as it goes. Is that right? Exactly. And would it add comments into the code so other coders can see what's going on? Yeah, sort of. That's great. So how does BlinqIO's generative AI ensure software quality at a higher level than existing solutions?

So think about it like a mega test automation engineer. Generative AI doesn't go to sleep, doesn't take vacations. It never makes mistakes. So one of the things we do is remove all the ability to do any of those hallucinations, which I mentioned before, of dreaming up things that don't exist. You can actually calibrate that, and we calibrated it to zero.

So it's called temperature in some of the models and differently in others, but it basically means we removed its imagination, right? So it's only very knowledge-based interactions. But what it does, it just does the work that humans do, without errors, quickly, with as many of those test engineers as you wish, working night and day. So in that sense, the quality of software is much better.
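For readers unfamiliar with the "temperature" setting, here is a generic sketch of what calibrating it to zero looks like in an API call. It assumes the OpenAI Node SDK and an illustrative model name and prompt, and shows the general mechanism only, not BlinqIO's internal setup.

```typescript
// Generic illustration of "temperature" (not BlinqIO's internal code).
// At temperature 0 the model always picks its most likely next token, so
// output is deterministic and stays close to the given facts instead of
// "imagining" extra behaviour.
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function generateTestSteps(requirement: string): Promise<string> {
  const completion = await client.chat.completions.create({
    model: "gpt-4o",   // illustrative model choice
    temperature: 0,    // "imagination" off: no creative sampling
    messages: [
      {
        role: "system",
        content: "Produce test steps strictly from the requirement given. Do not invent features.",
      },
      { role: "user", content: `Requirement: ${requirement}` },
    ],
  });
  return completion.choices[0].message.content ?? "";
}

generateTestSteps("The Add to cart button is disabled while stock is 0.")
  .then(console.log);
```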

You need to understand that today's software testing is done on a risk-assessment basis, meaning the software never happens to be fully tested before it's released. And the reason is that there is just no way companies can do that. So they usually select, with some sort of risk management model, only to test, let's say, on iOS or only on Android devices, or only in English or only in Spanish, or you have all sorts

of trade-offs: only test parts which were modified, but not test other parts which were not modified but might have been affected, and so forth. And actually with generative AI, you don't need to do all of this risk assessment and take all of those

sort of informed risks, because you can actually test everything. The ability to test quickly, at scale, at a fraction of the cost, is something that brings out quality software which has never been seen before.
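To illustrate what "testing everything" rather than a risk-based subset can look like in practice, here is a sketch of a Playwright configuration that runs one suite across several browsers, devices, and locales in parallel. The project names, devices, and locales are assumptions for the example, not a BlinqIO configuration.

```typescript
// playwright.config.ts (illustrative): run the same test suite across
// multiple browsers, device profiles, and locales instead of picking one
// "risky" subset to test.
import { defineConfig, devices } from "@playwright/test";

export default defineConfig({
  fullyParallel: true,
  projects: [
    { name: "chromium-en", use: { ...devices["Desktop Chrome"], locale: "en-US" } },
    { name: "chromium-es", use: { ...devices["Desktop Chrome"], locale: "es-ES" } },
    { name: "webkit-en",   use: { ...devices["Desktop Safari"], locale: "en-US" } },
    { name: "android-en",  use: { ...devices["Pixel 7"],        locale: "en-US" } },
    { name: "iphone-es",   use: { ...devices["iPhone 14"],      locale: "es-ES" } },
  ],
});
```

The cost of covering every combination is mostly machine time, which is the scale argument being made here.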

Excellent. One area that would benefit greatly from this is computer games. I'm a bit of a gamer, and at times I've bought a new game on release and it's been incredibly buggy. So something like this would have been useful before release, if they'd just run the code through something like this. Exactly. Do you have any clients from the gaming industry at all?

Yeah, definitely. We have clients from the gaming industry, and also from other, more serious industries such as healthcare and banking and airlines, all these different companies. At the end of the day, what they're doing is software. And if you think about a gaming application, how many elements does it have,

how much, you know, animation, scoring, all of that stuff is there that needs to be tested every time a new version is released. It's madness. I mean, that's why they either come out very buggy or they come out very slowly. You know, that's basically the trade-off. Yeah, yeah, that makes sense. Okay then, what do you see as the biggest barriers companies face when adopting AI for software testing, and how can they overcome them?

I think, first of all, there is sort of an organizational challenge of people being really worried that it's going to take over whatever role they had. I think it's very important to have an open discussion and show people what their role will be when generative AI comes into the picture. And once people understand that, a lot of the fear and resentment about

integrating generative AI into the organization is basically put aside. So I think that that's probably the number one challenge in companies. But I think that if companies understand the strategic impact of implementing AI in their organization, there is no way they would actually not go forward and do that. Yeah.

So as an example with software testing, do you see a time, then, when that kind of career or stepping stone in a software engineer's path becomes obsolete? Or will they always need to have that knowledge? It's like learning mathematics when you've got a calculator. Do you know what I mean? Yeah.

Yeah, but still, you know, when you need to figure out some, you know, deep algorithm for something, the calculator is quite useless. You need the human imagination, creativity, and discretion. And it's not different in software testing. At the end of the day, you might be spending much less of your time trying to correct some syntax errors in the

code, or trying to find the locator ID in order to update a test automation script that has become obsolete, but you would be putting much more time into the testing strategy: how to test, what to test, how frequently to test. And these are actually the more interesting parts of software testing.
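As a small illustration of the mundane maintenance work being described, here is a hedged Playwright snippet showing the kind of locator update a UI change forces on a test script. The page, selectors, and IDs are hypothetical.

```typescript
// Illustration of routine test maintenance: a UI change breaks a hard-coded
// locator and the script has to be updated (by a person or, in the workflow
// described here, by an AI test engineer). Selectors are placeholders.
import { test, expect } from "@playwright/test";

test("checkout button is visible", async ({ page }) => {
  await page.goto("https://example.com/cart"); // placeholder URL

  // Before a redesign the button had id="btn-checkout-2":
  // await expect(page.locator("#btn-checkout-2")).toBeVisible();

  // After the redesign the id changed, so the locator was updated,
  // here to a role-based locator that is less likely to go stale:
  await expect(page.getByRole("button", { name: "Checkout" })).toBeVisible();
});
```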

I see, I see. And I think you mentioned that the AI has basically had its imagination turned off. So is it not able to offer more streamlined corrections to a function or something, part of the code, so that you can do this a lot easier, a lot faster that way?

It's based on the ability to turn off its imagination. It would not work if you're looking for it to do something like DALL-E, asking it to create pictures and images and be creative. But in the type of work in software testing, where you're actually, at the end of the day, programming against a definite requirement,

there is no point allowing it to imagine things that don't exist. That would be counterproductive. It actually needs to test that whatever should exist exists, and whatever shouldn't

exist, you know, doesn't exist, because it's not actually required. So you don't want it to imagine. The ability of generative AI to be creative is, in some cases, a requirement. But in our domain of expertise, software testing, it's not only not a requirement, it's actually not good to have it at all. And that's why we turned it off. I see. I've got you. But then, yes, and then it's up to the human

coders to look at the code and say, right, I've got this idea to replace this section. I don't know, say it reduces the code by 80% or something. And then your AI can test it and say, oh yeah, it does work, or it doesn't work. Am I making any sense at all?

Yeah, no, no. So, definitely, I mean, the generative AI is able to create the test automation code, the code that tests the application or website automatically, and it's state of the art. We actually create open-source Playwright code, so there is also no sort of vendor lock-in in our stuff. This is open-source code, and it knows how to maintain it. And so there is actually no need to touch the code that it creates.

And the programmer on the other side, the person that creates the application that is being tested, can play around, create new versions, add new features, remove stuff. And all of that is going to be tested seamlessly, you know, on the fly by the AI test engineer, which is something interesting

that today is not happening and not available when you don't have AI, and it takes a lot of time until a programmer or engineer can actually change things and have them released to the market after they've been tested. So all that part is actually now becoming seamless. Gotcha. Gotcha. That makes sense. Okay then. So BlinqIO is a relatively new player in AI-driven software testing. What inspired its founding, and where do you see it in the next five years?

So the whole area of software development, testing, release, monitoring, all of this

is, you know, a huge part of our life. You know, if you think about how much you're using software today, anything on your, you know, mobile phone, anything on your computer, most of what you watch today on your TV, all that is software, you know, cars running on software and so forth. So the whole area of software

is significantly changing as a result of AI. Software development has become super quick with Copilot, which enables programmers to create code quickly. And I think that we are looking at our area of expertise, which is software testing, and looking to make a huge impact in this area with AI. It would be a bit similar to what

project management software such as JIRA and others did to the domain of project management, where in the past you'd have a person holding a meeting, going through all the tasks and sort of assigning them and planning. Now all that happens seamlessly in the background; there are programs such as JIRA and others that are doing that. We believe the same will happen to software testing. So there is not going to be sort of this waterfall type of

thing where people are waiting on someone to come in in the morning, analyze which tests failed and which tests passed, and correct and assign different tasks to the R&D, DevOps, and testing teams to fix them. All of that is going to happen seamlessly in the background with generative AI. It's also going to correct all the errors and make the

updates that are required on the testing side, and attribute cases which have to go to R&D, to the DevOps team, or to the environment aspects. And all that is going to happen seamlessly. And we believe this is going to make a major change in software release cycles. And we're actually at the very heart of that change, implementing generative AI in software testing.

Fantastic. If it makes things like games more reliable on release, I'm all for it. Okay then, looking ahead, what do you believe 2025 holds in store for the AI industry? Will there be any breakthroughs or disruptions coming, do you believe?

I believe that there is a lot of development happening. Every few months, there is a new release of an LLM engine. These engines are becoming super powerful and capable. And on top of that, as I said, the layer of AI agents is growing constantly. So I think there are going to be much more specialized AI agents, similar to ours in software testing that will be in travel and medicine and you name it.

So that's one trend that is going to happen. The other trend, I think, is that the compliance and management of usage, both of the data which is aggregated in these LLM engines, as well as the transparency to the user about what type of information is provided and all sorts of possible issues or

opinions that the AI might have which are not totally clean; all of these sorts of compliance and transparency rules will be implemented during 2025, because that is quite an important need for this industry to keep going happily forward.

Gotcha. Where do you see DeepSeek by the end of 2025? Well, talking about compliance and aggregating information, that's definitely a place where these types of concerns surface and require very clear awareness from users of what is actually happening with the information and questions they type in.

Absolutely. Absolutely. Right. And that's all my questions, Tal. Is there anything else you'd like to add before we close up? Anything you feel is important that we haven't touched on?

I think there is a huge opportunity today. And I think that everyone, actually, in any domain, not just in software testing or programming, but also in medicine and law and admin and operations, should embrace generative AI, because it's going to make everyone much more productive, meaningful, and valuable to their organization. So I wish all of us good luck with that, and happy 2025.

Okay, with that then, Tal, thank you for your time. That was very, very interesting. Also, don't forget to subscribe to IE Plus for premium insights and exclusive content.