Ian Makgill in Spotlight for STELAR: Procurement & Transparency
In this episode of the STELAR project’s Data Stories 360° podcast, Julia Mitrovic, Deputy Head of Communications at Foodscale Hub, speaks with Ian Makgill, a seasoned business technology professional with over 25 years of expertise in public procurement and supply chains.
Ian works to improve the global procurement market by helping governments unlock and utilise procurement data for better outcomes. This time, he discusses his work with Spend Network and the enRichMyData project, exploring the challenges and opportunities in procurement data.
We have prepared an extensive set of questions for Ian to help bring the topic of public procurement and data spaces closer to our audience. The STELAR project has a very diverse target audience, and we really want to help this broad group better understand these topics.
Exploring Our Guest's Motivation and Background
Can you tell us a bit about your career journey and what led you to founding Spend Network?
Like all good things, it started with failure. In 2008, I was running a management consultancy that mainly helped financial services companies work with the government, and government work with financial services companies. We were doing analysis, research, and providing advice. Of course, the financial crisis hit, and we went from being an interesting company that provided good services to suddenly having all of our contracts cut virtually overnight.
I thought to myself, “I really don’t want to go through that again.” So, I decided to teach myself how to program and build databases, and I started gathering data on how governments spend money.
It took a few months of teaching myself how to program in the evenings, while we kept the consultancy going. We were still operating the same company, but we reinvented ourselves as a data company. It took some time, but I was moderately successful – successful enough to gather the data properly and draw insights through analysis. That’s how we got started – out of necessity and a dislike of day rates.
You have a diverse background – how have your experiences shaped your current role and perspective on data-driven innovation?
This is one of those questions where you could answer in different ways. I went to art school and had no business being in a data-driven environment. I struggled with maths and was not particularly good at it. But once I started working, I learned a lot. I was always interested in computers and graphic design, and I found the idea of telling a machine how to do things, and what data meant, really interesting.
It doesn’t really matter that I came to it in a fairly indirect way; I think that was more powerful because it allowed me to think, “I don’t have to absorb all of this dogmatic theory about how data should work, when to use it, or how to use it. I just need to know how to make spreadsheets perform and get a basic understanding of statistics, and I can learn the rest on my own.”
I think that inability to accept the norm played a big part. No one who knew what it involved would have taken on this job: “I’m going to collect every tender in the world.” Brilliant idea, right? But if you knew about data, especially public procurement data, you wouldn’t have done it. My ignorance was actually a really powerful motivator to build something better, because I didn’t understand how difficult it would be. I think 90% of people, if they really knew what we were doing, would just say, “Don’t bother, it’s a nightmare.” But we just blundered into it and had enough success.

What motivates you on a personal level to continue working in the public procurement and data space?
One of the reasons I became interested in procurement is because, if you believe that government and the broader public sector play a fundamental role in maintaining and advancing society – which I do – you will quickly realise that much of that work is dictated by procurement. We develop enormous strategies and policies, instruct organisations to implement them within their budgets, and then they turn to procurement. Procurement becomes the largest leverage point for delivering government work and shaping society.
Governments spend $13 trillion a year with their suppliers, not including general ledger (GL) spending. If I can make even 1% of 1% of that better – still $1.3 billion a year – I will have made a huge impact on society as a whole. I’m not saying I can, but it’s a nice ambition.
Insights from enRichMyData and Public Procurement
Spend Network is part of the enRichMyData project. Can you share what your role in this project involves? With your vast experience in structuring unstructured, semi-structured, and structured data, how has that expertise been applied in the enRichMyData project?
We are basically part of the team that is providing a business case, and we are working with the data that exists around the names of buyers, primarily for Europe. However, we are also looking globally to see what we can do to reconcile the name of a given buyer to something that is equivalent to a register of public entities.
In simple terms, the data about who buys what is terrible. Each entry could have dozens of different cross-references or similar variations. Our goal is to bring order to this data. Enriching the data helps us build a process where we can identify whether the entity is a university, a health institution, an education institution, or a city council.
Once we’ve categorised this, we also bring in the structures that allow us to categorise the data by region. For instance, we can search for all opportunities in the southwest of France or all opportunities not just in Denmark, but in a specific area or town. We are growing this effort to systematically do this on a global scale.
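To make this concrete, here is a minimal sketch, in Python, of the kind of reconciliation and categorisation step Ian describes. The register entries, categories, regions, and matching threshold are invented for illustration; Spend Network’s actual pipeline is not public, and a production system would work against a real register of public entities at far larger scale.

```python
import difflib

# Illustrative register of canonical public entities (assumed data).
REGISTER = {
    "University of Bristol": ("university", "South West England"),
    "Bristol City Council": ("city council", "South West England"),
    "North Bristol NHS Trust": ("health institution", "South West England"),
}

# Lowercased index so matching is case-insensitive.
_INDEX = {name.lower(): name for name in REGISTER}

def reconcile(raw_buyer_name, cutoff=0.75):
    """Match a messy buyer name to a canonical register entry.

    Returns (canonical_name, category, region), or None when no
    sufficiently close match exists (flag for manual review)."""
    hits = difflib.get_close_matches(raw_buyer_name.lower(), list(_INDEX),
                                     n=1, cutoff=cutoff)
    if not hits:
        return None
    canonical = _INDEX[hits[0]]
    category, region = REGISTER[canonical]
    return (canonical, category, region)

# Dozens of spelling variants of one buyer collapse to a single entity:
for variant in ["Univ. of Bristol", "UNIVERSITY OF BRISTOL", "Bristol Cty Council"]:
    print(variant, "->", reconcile(variant))
```

Once every buyer resolves to one canonical entity with a category and a region, queries like “all tenders from universities in the southwest of France” become simple filters rather than guesswork.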

How do you think the enRichMyData project can help bridge the gap between public and private sectors, especially when it comes to data sharing and procurement transparency?
Procurement is one of those strange problems where everyone wants the data to be better, as a principle. However, the path to improving it is really hard, because you have to persuade dozens of individuals in hundreds of thousands of organisations to care about one tiny bit of administrative work sufficiently to make the data better. But in reality, it helps everyone if the data is improved, because then people can say, “Okay, we’ve got a problem here” or “Oh, that’s a really good tender, I can take that.”
The first thing we are doing is making that data accessible. For example, we can now find all the tenders for robotic arms in universities. That makes it easier for suppliers to bid on robotic arms and adjust their bids accordingly. It also makes it easier for universities to look at who else has been buying robotic arms and what their experiences have been.
By bringing all that together, we create a more competitive and transparent environment. Ideally, this leads to a more efficient delivery of public funds in the long run.
The project aims to streamline procurement processes and overcome data challenges. What do you see as the biggest data obstacles in public procurement today?
It is data quality. In the UK, for example, you can publish attachments to a tender on one of our portals, but there is no detail about what is in those documents. An attachment could be a notice on how to use the web portal, a contract, a specification, plans, or anything else; it is just labelled as an attachment. That makes it difficult to understand what everything is, and to make sense of it using LLMs.
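As a toy illustration of why untyped attachments are such a problem: with no declared document type, even a first triage pass has to fall back on fragile heuristics. The keyword patterns below are invented for this example; a real pipeline would work from full document text, layout features, or an LLM classifier, and would still struggle for exactly the reasons Ian describes.

```python
import re

# Invented keyword heuristics, purely for illustration.
PATTERNS = {
    "contract": r"\b(terms and conditions|agreement|hereinafter)\b",
    "specification": r"\b(specification|technical requirements|scope of work)\b",
    "portal guidance": r"\b(how to register|using this portal|supplier guide)\b",
}

def guess_attachment_type(text):
    """Crude triage of an untyped tender attachment by keyword matching."""
    lowered = text.lower()
    for label, pattern in PATTERNS.items():
        if re.search(pattern, lowered):
            return label
    return "unknown"  # the common case, which is exactly the problem

print(guess_attachment_type("This Agreement sets out the terms and conditions..."))
# contract
```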
We are working on enhancing the data, but it’s really hard work, because someone did poor data design and no one cared about the data quality. On a quick side note, we’re really bad at buying data services. We’re getting better at buying websites: we commission them, hand them over, and hope the tool will be great and people will use it. But then the developers produce data structures that are absolutely incoherent and useless. We’ve got a nice shiny website, but the data underneath is a complete mess. I think we have to recognise that agencies commissioned to build websites should own the quality of the data as well as the frontend.
Both STELAR and enRichMyData are funded under the same call. The STELAR project is all about improving data flow and sharing for better decision-making in agriculture. How can your work in public procurement data relate to improving data systems for agriculture or other sectors?
At a very basic level, there are organisations such as a Ministry of Agriculture in most countries across Europe, and they buy goods and services related to their remit. The question is, can we make them more efficient? Can we help them understand the importance of high-quality data and good procurement practices? The answer is probably yes. At this fundamental level, looking at how contracts are distributed by Ministries of Agriculture and local regional ministries makes sense. Similarly, areas like transport, ensuring goods can move around the country, are equally important.
However, I think there’s another part to this, which is really interesting. In agriculture, we understand that building shared data structures, where many people contribute data that is used in a centralised or aggregate form, is a significant and typically challenging task. Procurement is a good example of how not to approach this. What we’ve done is treat procurement as a legal document, and all we’ve been doing is digitising those legal documents. The period we’re living through now is when governments are realising that digitising documents is not the same as building great databases.
Take grant making, distribution of licenses, land value transactions, or anything else – building great databases is absolutely essential for the success of the agricultural sector. If I can prove anything through procurement, it’s the importance of building a great database.
What lessons from Spend Network’s work in public procurement data can be applied to agricultural data management or even data sharing in other fields?
There is a process by which we drive individual farms through some sort of bureaucracy, often related to grant management. We push farmers to complete work, or applications for grants, in a way that is focused on preventing fraud.
However, we do not do enough to gather data that can be used to identify and prevent fraud later on. I genuinely believe that we should make it easier for applicants to engage and create great data. By making it easier for those creating the data – the people submitting applications – and building better databases, we can actually do much more to prevent fraud.
The reason we focus on prevention, not just detection, is because if I have great data, I can say, “Hold on, these 15 farms, all in the same area, submitted their applications within three minutes of each other – that’s unusual. Why?” I can only do that when my data is really good, and I understand things like proximity and timing. If I can identify that, I might spot a cartel or an attempt to defraud us. One of the biggest challenges in procurement is that we cannot spot cartels because the data is not good enough.
What we are trying to do is prevent fraud, and the best approach is to make it easier to create the data and then spot these activities. We do not know what we are looking for until we have all the data. That is one area I think could really be improved.
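Ian’s point about timing and proximity translates into a surprisingly small check once the underlying data is clean. Below is a minimal sketch of flagging a burst of applications from one area; the record shape (application_id, area_code, submitted_at) and the thresholds are assumptions for illustration, and real screening would layer richer signals (ownership links, bank details, geography) on top.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Assumed record shape: (application_id, area_code, submitted_at).
applications = [
    ("A1", "SW-02", datetime(2024, 3, 1, 9, 0, 5)),
    ("A2", "SW-02", datetime(2024, 3, 1, 9, 1, 40)),
    ("A3", "SW-02", datetime(2024, 3, 1, 9, 2, 55)),
    ("A4", "NE-07", datetime(2024, 3, 1, 14, 30, 0)),
]

def suspicious_bursts(records, window=timedelta(minutes=3), min_size=3):
    """Flag groups of applications from the same area submitted within
    `window` of each other: unusual enough to deserve a human look."""
    by_area = defaultdict(list)
    for app_id, area, ts in records:
        by_area[area].append((ts, app_id))
    flags = []
    for area, entries in by_area.items():
        entries.sort()
        start = 0
        for end in range(len(entries)):
            # Shrink the window until it spans at most `window` of time.
            while entries[end][0] - entries[start][0] > window:
                start += 1
            if end - start + 1 >= min_size:
                flags.append((area, [app for _, app in entries[start:end + 1]]))
    return flags

print(suspicious_bursts(applications))
# [('SW-02', ['A1', 'A2', 'A3'])]
```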

The Future of Global Data-Sharing Ecosystems
With so much focus on data-driven solutions, what do you think are the next big steps for transforming data in public procurement, agriculture, and beyond?
Like everyone else, I am going to say AI. That’s not a surprise, but I take a very different view of AI. I recently wrote a post on LinkedIn about how often we fail to get an AI project over the line. In fact, the vast majority of AI projects that we start fail. Interestingly, the one we are doing with enRichMyData is a success, and we are putting that into production now. And, you know, I will also say this: the reason it’s a success is because I have been able to work with some of the best people in Europe to deliver that success. They’ve said, “You don’t do that, do it like this,” and it’s been fantastic.
European success is coming out of these projects, and that’s one thing I would just put to one side. But AI gives us the ability to parse data, evaluate data, spot errors in data, and triple-check data at scale without human intervention. Algorithmic matching of entities can be done at a much greater scale with far less human involvement, and without the cost that would normally be associated with such tasks. I am a great believer in that.
However, I am not a great believer in public authorities, or anyone else, setting up a chatbot and then saying, “Right, that’s my job done. I can lay off 15 people in my grant administration or procurement team.” I think that’s a terrible idea. It’s a really interesting period, and there are so many opportunities, it’s hard to quantify them right now.
Finally, how do you see the intersection of projects like enRichMyData and STELAR shaping the future of global data-sharing ecosystems?
The expectation of delivery on AI is misplaced. Taking a technology we’re just beginning to get to grips with and turning it into a production success is incredibly challenging. If you’re getting more than a 3% success rate, you’re doing really well. I think that’s growing now because the models are getting better. In the latest rounds of AI projects, you’ll probably be getting closer to a 7 or 8% success rate, maybe even 10%, which is incredible. That’s the first thing I would say: we need to recalibrate how hard this is to do.
The second thing, especially in our industry but likely also in agriculture, is that the interaction between the state and, in our case, the supplier or, in your case, the landowner can completely flip.
The state can use models to research the activities of a landowner, pull down records from the land registry, look at yields, look at architecture, whatever it is, and make a decision about eligibility for grants based on pre-existing data. Then they can say, “We’ve assessed all of your data. This is what we think is right for your grant. You can appeal if you don’t like it,” rather than pushing people into a process where they have to endlessly submit, “This is what I do, this is where I live.”
We could manage by exception rather than by the rule. These are the sorts of things that are shifting: the way we gather data and feed it into the state is completely transforming. I think this is a really interesting way of looking at how data sharing can start to happen tacitly, rather than deliberately.
Conclusion
This conversation highlighted the challenges of implementing AI in real-world applications, emphasising the need for realistic expectations, improved data-sharing practices, and a shift towards managing by exception rather than rigid processes to enhance efficiency in both public procurement and agriculture.
Follow our YouTube channel and explore our Blogs for more insights on data advancements, AI applications, and the future of data sharing!