Artificial intelligence is on everyone's lips these days, sparking excitement, fear and endless debates. Is it a force for good or bad – or a force we have yet to fully understand? We sat down with prominent computer scientist and AI researcher Mária Bieliková to discuss these and other pressing issues surrounding AI, its impact on humanity, and the broader ethical dilemmas and questions of trust it raises.
Congratulations on becoming the latest laureate of the ESET Science Award. How does it feel to win the award?
I feel immense gratitude and happiness. Receiving the award from Emmanuelle Charpentier herself was an incredible experience, filled with intense emotions. This award doesn't just belong to me – it belongs to all the exceptional people who accompanied me on this journey. I believe they were all equally thrilled. In IT, and in technology in general, results are achieved by teams, not individuals.
I am delighted that this is the first time the main category of the award has gone to the field of IT and AI. 2024 was also the first year the Nobel Prize was awarded for progress in AI. In fact, there were four Nobel Prizes for AI-related discoveries – two in Physics for machine learning with neural networks and two in Chemistry for training deep neural networks that predict protein structures.
And of course, I feel immense pride for the Kempelen Institute of Intelligent Technologies, which was established four years ago and now holds a stable place in the AI ecosystem of Central Europe.
A leading Slovak computer scientist, Mária Bieliková has carried out extensive research in human-computer interaction analysis, user modelling and personalization. Her work also extends to data analysis and the modelling of antisocial behavior on the web, and she is a prominent voice in the public discourse about trustworthy AI, the spread of disinformation, and how AI can be used to combat the problem. She also co-founded and currently heads the Kempelen Institute of Intelligent Technologies (KInIT), where ESET acts as a mentor and partner. Ms. Bieliková recently won the Outstanding Scientist in Slovakia category of the ESET Science Award.
Author and historian Yuval Noah Harari has made the pithy observation that, for the first time in human history, nobody knows what the world will look like in 20 years or what to teach in schools today. As someone deeply involved in AI research, how do you envision the world 20 years from now, particularly in terms of technology and AI? What are the skills and competencies that will be essential for today's children?
The world has always been difficult, uncertain, and ambiguous. Today, technology accelerates these challenges in ways that people struggle to manage in real time, making it hard to foresee the consequences. AI not only helps us automate our activities and replace humans in various fields, but also create new structures and synthetic organisms, which could potentially cause new pandemics.
Even if we didn't anticipate such scenarios, technology is consciously or unconsciously used to divide groups and societies. It is no longer just digital viruses aiming to paralyze infrastructure or gain resources; it is the direct manipulation of human thinking through propaganda spread at the speed of light and on a scale we couldn't have imagined a few decades ago.
I don't know what kind of society we will live in 20 years from now or how the foundations of humanity will change. It may take longer, but we might even be able to change our meritocratic system, currently based on the evaluation of knowledge, in a way that doesn't divide society. Perhaps we will change the way we treat data once we realize we can no longer fully trust our senses.
I am convinced that our children will increasingly move away from the need for knowledge and from measuring success through various tests, including IQ tests. Knowledge will remain important, but it must be knowledge that we can apply. What will really matter is the energy people are willing to invest in doing meaningful things. That is true today, but we often underutilize this perspective when discussing education. We still evaluate cognitive skills and knowledge despite knowing that these competencies alone are insufficient in the real world today.
I believe that as technology advances, our need for strong communities and for the development of social and emotional skills will only grow.
As AI continues to advance, it challenges long-standing philosophical ideas about what it means to be human. Do you think René Descartes' observation about human exceptionalism, "I think, therefore I am", will need to be re-evaluated in an era where machines can "think"? How far do you believe we are from AI systems that could push us to redefine human consciousness and intelligence?
AI systems, especially the large foundation models, are revolutionizing the way AI is used in society. They are continually improving. Before the end of 2024, OpenAI announced new models, o3 and o3-mini, which achieved significant advancements in all tests, including the ARC-AGI benchmark that measures AI's efficiency in acquiring skills for unknown tasks.
From this, one might assume that we are close to reaching Artificial General Intelligence (AGI). Personally, I believe we are not quite there with current technology. We have excellent systems that can assist in programming certain tasks, answer numerous questions, and, in many tests, perform better than humans. However, they don't truly understand what they are doing. Therefore, we cannot yet talk about genuine thinking, even though some of the reasoning behind task resolution is already being done by machines.
Going by how we understand terms like intelligence and consciousness today, we can say that AI possesses a certain level of intelligence – meaning it has the ability to solve complex problems. However, as of now, it lacks consciousness. Based on how it functions, AI does not have the capability to feel and use emotions in the tasks it is given. Whether this will ever change, or whether our understanding of these concepts will evolve, is difficult to predict.
Mária Bieliková receiving the ESET Science Award from the hands of Nobel Prize laureate Emmanuelle Charpentier
The notion that "to create is human" is being increasingly questioned as AI systems become capable of producing art, music, and literature. In your view, how does the rise of generative AI impact the human experience of creativity? Does it enhance or diminish our sense of identity and uniqueness as creators?
Today, we witness many debates on creativity and AI. People devise various tests to showcase how far AI has come and where these AI systems or models surpass human capabilities. AI can generate images, music, and literature, some of which could be considered creative, but certainly not in the same way as human creativity.
AI systems can and do create original artifacts. Although they generate them from pre-existing materials, we may still find some genuinely new creations among them. But that is not the only important aspect. Why do people create art, and why do people watch, read, and listen to art? At its essence, art helps people find and strengthen relationships with one another.
Art is an inseparable part of our lives; without it, our society would be very different. That is why we can appreciate AI-generated music or artwork – AI was, after all, created by humans. However, I don't believe AI-generated art would satisfy us in the long term to the same extent as real art created by humans, or by humans with the support of technology.
Just as we develop technologies, we also seek reasons to live, and to live meaningfully. We may live in a meritocracy where we try to measure everything, but what brings us closer together and characterizes us are stories. Yes, we could generate those too, but I am talking about the stories that we live.
AI research has seen fluctuations in progress over the decades, but the recent pace of advancement – especially in machine learning and generative AI – has surprised even many experts. How fast is too fast? Do you think this rapid progress is sustainable or even desirable? Should we slow down AI innovation to better understand its societal impacts, or does slowing down risk stifling beneficial breakthroughs?
The speed at which new models are emerging and improving is unprecedented. This is largely due to the way our world functions today – an enormous concentration of wealth in private companies and in certain parts of the world, as well as a global race in several fields. AI is a significant part of these races.
To some extent, progress depends on the exhaustion of today's technology and the development of new approaches. How much can we improve current models with known methods? To what extent will big companies share new approaches? Given the high cost of training large models, will we just be observers of improving black boxes?
At present, there is no balance between the systems humanity can create and our understanding of their effects on our lives. Slowing down, given how our society works, is not possible, in my view, without a paradigm shift.
That is why it is crucial to allocate resources and energy to researching the effects of these systems and to studying the models themselves, not just through standardized tests as their creators do. For example, at the Kempelen Institute, we research the ability and willingness of models to generate disinformation. Recently, we have also been looking into the generation of personalized disinformation.
There is a lot of excitement around AI's potential to solve global challenges – from healthcare to climate change. Where do you believe the promise of AI is greatest in terms of practical and ethical applications? Can AI be the "technological fix" for some of humanity's most pressing problems, or do we risk overestimating its capabilities?
AI can help us tackle the most pressing problems while simultaneously creating new ones. The world is full of paradoxes, and with AI, we see this at every turn. AI has been useful in various fields. Healthcare is one such area where, without AI, some progress – for example, in developing new medicines – would not be possible, or we would have to wait much longer. AlphaFold, which predicts the structure of proteins, has enormous potential and has been used for years now.
On the other hand, AI also enables the creation of synthetic organisms, which can be useful but also pose risks such as pandemics or other unforeseen situations.
AI assists in spreading disinformation and manipulating people's thinking on issues like climate change, while at the same time it can help people understand that climate change is real. AI models can demonstrate the potential consequences for our planet if we continue on our current path. That is crucial, as people tend to focus only on short-term challenges and often underestimate the seriousness of the situation unless it directly affects them.
However, AI can only help us to the extent that we, as humans, allow it to. That is the biggest challenge. Since AI does not understand what it produces, it has no intentions. But people do.
Image credit: © Miro Nota
With great potential also come significant risks. Prominent figures in tech and AI have expressed concerns about AI becoming an existential threat to humanity. How do you think we can balance responsible AI development with the need to push boundaries, all while avoiding alarmism?
As I mentioned before, the paradoxes we witness with AI are immense, raising questions for which we have no answers. They pose significant risks. It is fascinating to explore the possibilities and limits of technology, but on the other hand, we are not ready – neither as individuals nor as a society – for this kind of automation of our skills.
We need to invest at least as much in researching technology's impact on people, their thinking, and their functioning as we do in the technologies themselves. We need multidisciplinary teams to jointly explore the possibilities of technology and its impact on humanity.
It is as if we were making a product without caring about the value it brings to the consumer, who would buy it, and why. If we didn't have a buyer, we wouldn't sell much. The situation with AI is more serious, though. We have use cases, products, and people who want them, but as a society, we don't fully understand what is happening when we use them. And perhaps most people don't even want to know.
In today's global world, we cannot stop progress, nor can we slow it down. It only slows when we are saturated with results and find it hard to improve, or when we run out of resources, as training large AI models is very expensive. That is why the best safeguard is researching their impact from the beginning of their development and setting boundaries for their use. We all know that it is prohibited to drink alcohol before the age of 18, or 21 in some countries, yet often without hesitation we allow children to chat with AI systems, which they can easily liken to humans and trust implicitly without understanding the content.
Trust in AI is a major topic globally, with attitudes toward AI systems varying widely between cultures and regions. How can the AI research community help foster trust in AI technologies and ensure that they are seen as beneficial and trustworthy across diverse societies?
As I was saying, multidisciplinary research is essential not only for discovering new possibilities and improving AI technologies but also for evaluating their capabilities, how we perceive them, and their impact on individuals and society.
The rise of deep neural networks is changing the scientific methods of AI and IT. We have artificial systems whose core principles are known, but through scaling they can develop capabilities that we cannot always explain. As scientists and engineers, we devise ways to ensure the necessary accuracy in specific situations by combining various processes. However, there is still much we do not understand, and we cannot fully evaluate the properties of these models.
Such research does not produce direct value, which makes it challenging to garner voluntary support from the private sector on a larger scale. This is where the private and public sectors can collaborate for the future of all of us.
AI regulation has struggled to keep up with the field's rapid advancements, and yet, as someone who advocates for AI ethics and transparency, you have likely considered the role of regulation in shaping the future. How do you see AI researchers contributing to policies and regulations that ensure the ethical and responsible development of AI systems? Should they play a more active role in policymaking?
Thinking about ethics is crucial, not only in research but also in the development of products. However, it can be quite expensive, because a real need has to arise at the level of a critical mass of people. We still need to consider the dilemma of acquiring new knowledge versus the potential interference with the autonomy or privacy of individuals.
I am convinced that a good resolution is possible. The question of ethics and credibility must be an integral part of the development of any product or research from the beginning. At the Kempelen Institute, we have experts on ethics and regulation who help not only researchers but also companies in evaluating the risks associated with the ethics and credibility of their products.
We see that all of us are becoming more sensitive to this. Philosophers and lawyers think about the technologies and offer solutions that, while not eliminating the risks, mitigate them, while scientists and engineers are asking themselves questions they hadn't considered before.
In general, there are still too few of these activities. Our society evaluates results primarily by the number of scientific papers produced, leaving little room for policy advocacy. This makes it all the more important to create space for it. In recent years, in certain circles, such as the natural language processing or recommender systems communities, it has become customary for scientific papers to include considerations of ethics as part of the review process.
As AI researchers work toward innovation, they are often confronted with ethical dilemmas. Have you encountered challenges in balancing the ethical imperatives of AI development with the need for scientific progress? How do you navigate these tensions, particularly in your work on personalized AI systems and data privacy?
At the Kempelen Institute, it has been helpful to have philosophers and lawyers involved from the very beginning, helping us navigate these dilemmas. We have an ethics board, and diversity of opinions is one of our core values.
Needless to say, it is not easy. I find it particularly problematic when we want to translate research results into practice and encounter issues with the data the model was trained on. In this regard, it is crucial to ensure transparency from the outset, so we can not only write a scientific paper but also help companies innovate their products.
Given your collaboration with large technology companies and organizations, such as ESET, how important do you think it is for these companies to lead by example in promoting ethical AI, inclusivity, and sustainability? What role do you think companies should play in shaping a future where AI is aligned with societal values?
The Kempelen Institute was established through the collaboration of individuals with strong academic backgrounds and visionaries from several large and medium-sized companies. The idea is that shaping a future where AI aligns with societal values cannot be achieved by just one organization. We have to connect and seek synergies wherever possible.
For that reason, in 2024 we organized the first edition of the AI Awards, focused on trustworthy AI. The event culminated at the Forbes Business Fest, where we announced the laureate of the award – the startup AI:Dental. In 2025 we are successfully continuing the AI Awards and have received more, and higher quality, applications.
We started discussing the topic of AI and disinformation almost 10 years ago. Back then, it was more academic, but even then we witnessed some malicious disinformation, especially related to human health. We had no idea of the immense influence this topic would eventually have on the world. And it is only one of many pressing issues.
I fear that the public sector alone has no chance of tackling these issues without the help of large companies, especially today, when AI is being used by politicians to gain popularity. I consider the topic of trustworthiness in technology, particularly AI, to be as important as other key topics in CSR. Supporting research on the properties of AI models and their impact on individuals is fundamental for sustainable progress and quality of life.
Thank you for your time!