Moonshot News and Dimitra Letsa have the pleasure of discussing with Mara Pometti, Lead AI strategist at IBM, about humanism, data journalism and algorithms, in an era where both journalists and users not only have to understand Artificial Intelligence but learn how to use it for good causes.
Mara is a brilliant mind working at the intersection of technology and journalism, and she has a lot to share about how newsrooms work, or could work better, with A.I., along with a fascinating approach to 'tech-savvy humanism' and the anthropocentric use of A.I. I met her as a 'pupil' at LSE's A.I. Academy for Small Newsrooms and was very much intrigued by her strategic approach to journalism and use of data. I hope you will enjoy this discussion and find it as interesting as I did!
Hello Mara, super happy and excited to have you on Moonshot. Thank you very much for your time.
Thank you Dimitra, for having me and for this interview. Actually, I’m super excited.
Let’s start a bit by sharing some information about you, who you are, your background…
I work as a Lead AI strategist at IBM, a title that I actually crafted myself to really define what I do daily in my job, which is helping clients define and develop a portfolio of AI initiatives by using humans as a lens to frame problems that today can be solved with the help of machine learning and algorithms.
I chart the path towards a new scenario by explaining how that scenario may play out through visual stories that are made of data – as by training, actually, I’m a data journalist. And now I find myself at the intersection of AI, strategy, and data journalism.
I connected these three realms by trying to come up with new approaches to AI that are focused on humans.
I strongly believe that today it's important to bring a human-centered approach into the design and development of AI in order to come up with new solutions that can augment people's lives.
How data can create visual stories
I am fascinated that you have studied journalism. Hearing how you work with AI and design, one would think that you’re a developer or an engineer. What is it that attracts you to artificial intelligence in relation to journalism? And what is the skillset that you had to add to the journalistic education in order to go so deep into AI?
Let me start from the skills that you have to add to a journalism background, which is data skills: coding mainly, programming languages for data analysis to be able to turn a vast amount of data into meaningful stories. Then through design, you find ways to visualize the data and give shape to the stories that are trapped in the tangled network of the data, so as to create visual information that easily flows into people’s eyes.
Programming languages are tools that make it possible to ask questions of the data and extract the answers you are looking for, just as you would by interviewing people.
By doing this, you are essentially applying the journalistic approach to another medium: the data. So these are the new skills you would need in order to work in the data journalism space. Journalists (but actually all of us!) need to equip themselves with new tools to make sense of the complex reality we live in. We cannot fully grasp the events happening around us if we cannot read and interpret data and algorithms.
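A minimal sketch of what "interviewing the data" can look like in practice. The dataset, vendor names, and amounts here are invented for illustration; in a newsroom this would be a real export from an open-data portal or a records request:

```python
import csv
import io
from collections import defaultdict

# Hypothetical CSV export of city contracts (invented data).
raw = """vendor,amount,year
Acme Paving,120000,2020
Acme Paving,310000,2021
Brightline Media,45000,2021
"""

rows = list(csv.DictReader(io.StringIO(raw)))

# The "interview question": which vendor received the most money in 2021?
totals_2021 = defaultdict(int)
for r in rows:
    if r["year"] == "2021":
        totals_2021[r["vendor"]] += int(r["amount"])

top_vendor = max(totals_2021, key=totals_2021.get)
```

The code asks the data a question a reporter might ask a source; the answer then becomes the seed of a story, or of a visualization.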
It's not only the data that matters, but also the context that surrounds the data.
That is a fascinating parallel; I had never thought of it that way. As you say, you ask the data what we would ask a person. I think that is a very insightful and simple way of making people understand what AI in journalism is. What do you think is most important regarding AI principles and bias in journalism?
I don't think there are principles exclusively for journalism. The problem of bias, unfortunately, is something that is spread across many industries, not just journalism. Actually, I have to say that if today we are talking about biases in AI, it is thanks to the effort of many journalists, starting from ProPublica, for example.

This nonprofit investigative newsroom was the first to realize that there was something wrong with the algorithms governing people's lives. This technology was supposed to help people, but instead it was basically discriminating against minorities. I think this goes back to how we approach problems in AI. Unfortunately, too often we see a merely technical application of these tools that is not supported by any human critical thinking, meaning, or strategy.
We look for the best-performing algorithms, the ones that have usually just come out of research, that we can adopt for our projects, and often we are so blinded by their power that we don't think enough about the problem we are trying to solve and the hypotheses to formulate based on it. We don't think that the machine learning model is eventually going to impact a human life. Most importantly, we need to create guardrails around a strategic thinking process to prevent biases in the algorithms, so as to mitigate risks ahead. Unfortunately, biases are intrinsic to the data that we use, because data is a human product and we, as human beings, have been producing biased knowledge since forever.
We construct, we build, we generate and produce information, and we store that information. That information carries with it a number of biases tied to human history, which we know is made of discrimination, injustice, and… a lot of biases.
So if we apply a humanistic approach to AI, we are able to anticipate or at least contain risks in the effort to put people first. And then we can think about the solution design, and the technologies to use.
How biases end up in the data
I would really like us to stress that, because what you are basically saying is that we should take it for granted that there are biases, treat AI as a product of them, and address them ahead of time, instead of being surprised at the end that an algorithm is biased or has problems because of bias incorporated in the data. It is a reflection of human nature. But so far, people have treated biases as a result of the AI, which, as you very correctly said, is the other way around: they originate in the data.
Yeah. Actually, biases can happen at any stage of the AI lifecycle, and there are different techniques to mitigate them. There are pre-processing techniques, which detect biases that appear in the data used to train algorithms, that is, biases inherent in the data; and there are in-processing and post-processing techniques, which address biases that might creep into an algorithm while it processes the data or when it delivers the final output. So different types of biases can occur, and there are also different practices today to counter them.
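A minimal sketch of the pre-processing idea Mara describes. One classic technique is reweighing: assign each training sample a weight so that group membership and label become statistically independent in the weighted data. The toy sample below is invented for illustration, not from any real dataset:

```python
from collections import Counter

# Toy training set of (protected_group, label) pairs. Deliberately biased:
# group "a" receives the positive label far more often than group "b".
samples = [("a", 1), ("a", 1), ("a", 1), ("a", 0),
           ("b", 1), ("b", 0), ("b", 0), ("b", 0)]

n = len(samples)
group_counts = Counter(g for g, _ in samples)   # how often each group appears
label_counts = Counter(y for _, y in samples)   # how often each label appears
pair_counts = Counter(samples)                  # joint (group, label) counts

# Reweighing: weight each (group, label) combination by
# P(group) * P(label) / P(group, label), so that in the weighted data
# the label is independent of group membership.
weights = {
    (g, y): (group_counts[g] / n) * (label_counts[y] / n) / (pair_counts[(g, y)] / n)
    for (g, y) in pair_counts
}
```

After reweighing, both groups have the same weighted positive-label rate, so a model trained on the weighted samples is no longer rewarded for learning the historical imbalance.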
But what we normally accuse AI of is the pre-processing bias. When an algorithm discriminates against a pool of people in a population, most of the time it is because of the data used to train that algorithm.

What is your biggest hope or dream that you could achieve with journalism in regards to AI? As a project or a product?
Let’s frame it as a project: I would really love to see the application of AI, specifically, natural language processing techniques, broadly used among newsrooms to find and deliver stories.
Today, we are full of textual data that is being stored, and yet very few people look at it or know how to use it. Almost no one is looking at textual data from a journalistic perspective. I think we are missing a huge opportunity to uncover meaningful stories that have never been told, and to shed light on complex issues regarding laws, regulations, politics, discrimination, and many others.
The analysis of our words can return a different perspective on things. I would love to see more AI-driven newsrooms, where AI is used not just to optimize the operations of the newsroom but where journalists use it as a tool to come up with stories.
I see too little of that today. I see a lot of data journalism in the mainstream sense, meaning analyzing data and stats and designing charts to drive stories. But when I talk to journalists and former colleagues, I don't hear about doing journalism with algorithms, except in very few cases, such as the R&D department of the New York Times, ProPublica, or The Markup. So it's still too niche, but the potential is so big. That's what I would like to see, or, who knows, accomplish one day in a newsroom.
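A minimal sketch of what mining text for story leads can look like. The two document collections below are invented; the idea is simply to compare term frequencies between two corpora (say, this year's council minutes versus last year's) and surface the terms that have become newly prominent:

```python
import re
from collections import Counter

def term_freqs(texts):
    """Relative frequency of each lowercase word across a list of texts."""
    counts = Counter()
    for t in texts:
        counts.update(re.findall(r"[a-z']+", t.lower()))
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# Hypothetical corpora, invented for illustration.
this_year = ["The council approved the eviction moratorium extension.",
             "Eviction filings rose sharply despite the moratorium."]
last_year = ["The council approved the annual parks budget.",
             "Budget hearings focused on road repairs."]

now, before = term_freqs(this_year), term_freqs(last_year)

# Terms whose relative frequency grew the most: candidate story leads.
leads = sorted(now, key=lambda w: now[w] - before.get(w, 0.0), reverse=True)[:3]
```

On this toy input the top movers are "eviction" and "moratorium", exactly the kind of signal a reporter would then investigate; real newsroom pipelines would add stop-word removal and more robust measures like TF-IDF, but the journalistic logic is the same.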
Journalistic skills and A.I.
That would be fantastic. You talk a lot with journalists; you even train journalists on AI. What is your feeling about journalists regarding AI? I know that the younger generations are much more familiar with it, but do they see it as a tool or as a threat? Do you see an improvement in how their technical skills are evolving?
AI is too often perceived as a threat. This makes me sad, because ultimately that perception is based on very little knowledge of this complex tool, and on an old rhetoric that has focused mostly on the bad side of AI.
We are losing the whole picture of the problem. I believe we really need to change the story around AI and how we talk about it, because, in the end, it's just a technology. We haven't seen such bad news and such negative stories about smartphones or computers. AI is no more harmful than they are, if used properly.
I mean, all technologies can be harmful. It's really about people being aware of that and knowing how to use them. We as experts have failed in popularizing AI, meaning, making people understand how AI works and how it impacts their lives.
If we are able to come up with better information about it, and stories, people will learn how not to be controlled by algorithms and will start seeing them for what they are: tools that can help in accomplishing tasks or processes in their daily life.
Yeah. And it's a huge upskilling opportunity for many journalists. But unfortunately, both the newsrooms and, I would say, the journalism schools are very slow in adapting to that change. I don't think you have failed, because it's a very new thing, and changes, unfortunately, always take time. The educational part is a huge opportunity that the community should take advantage of.
Exactly. I think the biggest problem is education. People of my age, or even younger, when they think about journalism, think only about writing, shooting videos, or photojournalism. They don't see data as a medium for coming up with stories. I think it's a problem in how our culture perceives the way journalism is done.
There is much more that journalists can do, and many more industries where journalists can be useful. Take me, for example: I work in a tech company. I started working in newsrooms, but eventually I ended up in the tech industry, hired as a data journalist, indeed.
We should really change our mindset regarding what journalism means and how we do journalism. Because eventually, journalism is a service that we do for society, or for other people, to let them better understand information, by better accessing that information.
Unfortunately, that’s my experience as well. When a journalist is very tech-savvy, they end up with a tech company, rather than in a newsroom…
Well, I think there is a much bigger need for tech-savvy journalists than the other way around, so we would have to see a change there. It's good if tech companies hire more journalists, but it's also good if newsrooms keep tech-savvy journalists working with them.
How tech-savvy humanists can find their place
What are the biggest challenges that you yourself have faced?
In terms of being a tech-savvy humanist, as I like to call myself, it was actually hard at the beginning. That was because there was no culture around my job in the enterprise; it still needed to be shaped. I mean, there are no specific directions about what a data journalist should do in business; at least, I had no clear idea of what my job would entail. I was hired by IBM as part of a very new team of data scientists, where the architect of the team, actually my former boss, had the foresight to envision a multidisciplinary team made up of data scientists, of course, but also of a data journalist, out of the need to explain the stories produced by data and algorithms.
It was not easy to come up with new methods, ideas, and strategies to make business stakeholders understand the impact of AI on their organization. It was very hard to build a new mindset from scratch and make my colleagues see what someone with my skills in data journalism could do in data science.
It was about telling them my story over and over again, supporting that story with work, projects, and innovative ideas. Experimenting with new visuals made from the predictions of machine learning models, and developing new methods and experiences to better address the connection between people and AI, eventually led them to see the value of data stories, which are not dashboards!
It's really a completely new concept of bringing data to life, where data becomes the language of your story, and people can interact with it and easily see how the results of machine learning or optimization models become a story they can read and visualize.
Which newsrooms have mastered A.I.?
Now that you talk about it, questions pop up. Which publisher do you like most? What do they do with AI? Where do you at least feel they are on the right path, so that you are a bit jealous and would like to be part of that project?
I really love the R&D team of the New York Times. I think they are far ahead of everyone in that space. Even Reuters, honestly, I have to admit has a very good AI lab.
That's great. Briefly, when it comes to AI and tech, do you think a publisher can do this work by themselves, or is it best facilitated with a strategic platform partner as well?
Given the complexity of the tools that are needed, I think a strategic partnership makes sense. Just today I was talking with a former colleague, a data journalist, actually, about this matter. It makes sense to collaborate with business partners because you can no longer afford to build and maintain your own tools. It's impossible for a newsroom to really bear the cost of what's required to run AI models and data infrastructure.
So it makes sense to work across the industry. There will, for sure, also be changes in the current business model of newsrooms.
Anthropocentric use of A.I.
I fully agree with that. I don't think it's possible for everybody, nor should everyone even try, to reinvent bigger or smaller wheels. With such a partnership, smaller newsrooms can also do magic. So I fully agree with what you say.
You have very clear and strong opinions about AI, very structured yet not academic per se. How do you approach passing your knowledge on to other people?
That’s a very good question. Because it connects to what I was saying about popularizing the concepts of AI. So yes, I am writing a book that is expected in June. I already published a book four years ago, about the art of communicating with data and our journey from information design to data journalism. But this new book that I am writing will be all about my vision of what a holistic human-centered approach to AI should look like.
Something that I call the anthropocentric view of AI. I hope with that book to make many complex concepts around AI more pop, while bringing AI closer to people. I have a pop soul, even though I ground my studies, as I said, in papers, academic studies, and research. I hope to bring people an understanding of how AI is subtly governing our lives, and to talk about how we can prevent that from happening through a humanistic vision. An anthropocentric view of AI, where humans are at the center of the AI experience, means taking humans into consideration not just from the point of view of action, as users, but also of feeling and thinking.
This comprehensive humanistic vision is part of who I am; in fact, my background is in Classics. Studies in the humanities are part of my thinking and methods, and always will be. In a sense, I'm coming back to my roots to apply my humanistic values in today's AI world.
I think it's fascinating, and I am really eager to read it. It's exactly what we need: more anthropocentric scientists who approach technology and humanity together, not one against the other. Education is also for users, to best use AI, not only for scientists. Hopefully, it will open the road to a better life, a better understanding of the product, and better journalism. Because we need facts, we need data, and we need democracy in times like these.
Thank you so much, Mara. You are a fascinating person to talk to. I do hope we will hear or read more of you on Moonshot News, especially upon the release of your book. Thank you very much, once again for taking the time to speak to us.
Thank you so much. Thank you for everything, Dimitra, really.