An Enhanced Approach to Live News Monitoring at NDUS
Thursday, January 8, 2026
by Nathon Lakoduk
In my time with the NDUS so far, I have had the opportunity to develop the NDUS News dashboard and the backend processes that power it. This project has strengthened my technical skills in many ways: my proficiency with Python and its many libraries, analytics skills like cleaning data and creating clear visuals, and data engineering skills like data ingestion, pipelines, and medallion architecture. Alongside those technical skills, I also developed soft skills such as problem solving, requirements gathering, and communication with stakeholders.
The NDUS News dashboard began as an idea to replace the previous process for the NDUS internal news email, which involved manually selecting and compiling articles into an email sent out three times a week, a task that took several hours each time. At first, we were unsure how to automate the article selection process, so we experimented with a few different ideas and Python libraries like BeautifulSoup. After further testing, we decided to use SerpApi, a search API service with a Python client, which fit what we were looking for: it provides easy access to search results and the metadata for each article to be processed. We soon had a test version working, began running the process as a daily workflow, and analyzed the results while making tweaks as needed. This completed the first step of the project: the bronze layer of our data pipeline.
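To illustrate what landing raw results in a bronze layer can look like, here is a minimal sketch. The function name, file layout, and the commented-out SerpApi call are my own illustrative assumptions, not the actual NDUS code; the key idea is that bronze records are stored exactly as received, with only an ingestion timestamp added.

```python
import json
import time
from pathlib import Path

def land_bronze(results, bronze_dir):
    """Append raw search results to a dated JSONL file in the bronze layer.

    Records are stored as-is, with only an ingestion timestamp added,
    so later layers can always re-derive everything from the raw data.
    """
    bronze_dir = Path(bronze_dir)
    bronze_dir.mkdir(parents=True, exist_ok=True)
    out_file = bronze_dir / f"news_{time.strftime('%Y%m%d')}.jsonl"
    with out_file.open("a", encoding="utf-8") as f:
        for record in results:
            record = dict(record)  # copy so we don't mutate the caller's data
            record["_ingested_at"] = time.strftime("%Y-%m-%dT%H:%M:%S")
            f.write(json.dumps(record) + "\n")
    return out_file

# In the real workflow, `results` would come from a search API call,
# e.g. with SerpApi's Python client (hypothetical query parameters):
#   search = GoogleSearch({"q": "NDUS", "tbm": "nws", "api_key": API_KEY})
#   results = search.get_dict()["news_results"]
```

Keeping the bronze file append-only and dated means a bad day's run can be reprocessed or discarded without touching any other data.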
Medallion architecture, simply defined, organizes data in a lakehouse into three layers: bronze, silver, and gold. The bronze layer holds the data in its rawest form; the data is then increasingly cleaned and processed in the silver layer, and finally reaches a production- and analysis-ready form in the gold layer. For this project, the bronze layer covered the steps we had completed thus far; the next task was to develop the silver layer.
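To make the three layers concrete, here is a small sketch of a medallion-style layout in code. The directory structure and descriptions are illustrative assumptions, not the actual NDUS layout.

```python
from pathlib import Path

# The three medallion layers and what each one holds (descriptions are
# a general summary, not project-specific definitions).
LAYERS = {
    "bronze": "raw, append-only data exactly as ingested",
    "silver": "cleaned and enriched records",
    "gold": "analysis-ready tables for reporting",
}

def layer_path(root, layer):
    """Return the storage path for a given medallion layer."""
    if layer not in LAYERS:
        raise ValueError(f"unknown layer: {layer}")
    return Path(root) / layer
```

Keeping each layer in its own location makes the promotion path explicit: data only ever flows bronze to silver to gold.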
The silver layer of this project involved several steps. The first was to handle missing values, incorrect formats, and the other standard data cleaning tasks. After this, we looked for ways to integrate AI into the process. Our first idea was to use AI as a category classifier, which we were able to implement successfully after some testing and adjusting. Next we implemented AI summarization, again using each article's metadata to prompt for a short summary of the article. These AI processes created new columns for each article in the silver layer; our vision was to use them to better filter each article and to provide easy-to-read summaries for the upcoming dashboard.
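The silver-layer steps above can be sketched roughly as follows. The field names are assumptions, and `classify` and `summarize` are placeholders standing in for the AI calls, which in the real pipeline would prompt a model with the article's metadata.

```python
def to_silver(record, classify, summarize):
    """Clean one bronze record and add the AI-generated columns.

    `classify` and `summarize` are injected so the cleaning logic can be
    tested without calling a model.
    """
    # Standard cleaning: trim whitespace, fill missing values with defaults.
    silver = {
        "title": (record.get("title") or "").strip(),
        "source": (record.get("source") or "unknown").strip(),
        "link": record.get("link", ""),
        "date": record.get("date", ""),
    }
    # Build the metadata string used to prompt the AI steps.
    metadata = f"{silver['title']} ({silver['source']})"
    silver["category"] = classify(metadata)   # AI category classifier
    silver["summary"] = summarize(metadata)   # AI short summary
    return silver
```

Passing the AI functions in as arguments also makes it easy to swap models or adjust prompts without touching the cleaning code.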
The final step for this project was the creation of the gold layer, now that the data had been cleaned, processed, and enhanced with AI-generated columns. We created a table to hold the gold-layer data and connected it to Power BI. The dashboard was developed and refined over time, during which I learned a lot about troubleshooting and researching error messages. After some final changes had been made and verified, we used Microsoft Power Automate to build an automated email that sends the dashboard's link and information three times a week. The project was recently completed, with regular maintenance to ensure accessibility, readability, and reliability.
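A minimal sketch of the gold-layer step might look like the following: take the enriched silver records, deduplicate them, and keep only the columns the dashboard needs. The column names and the dedup key are illustrative assumptions, not the actual NDUS schema.

```python
def to_gold(silver_records):
    """Build the analysis-ready gold table from silver records.

    Deduplicates articles by link and keeps only dashboard-facing columns.
    """
    seen = set()
    gold = []
    for r in silver_records:
        key = r.get("link")
        if key in seen:
            continue  # skip duplicate articles surfaced by multiple queries
        seen.add(key)
        gold.append({
            "title": r["title"],
            "source": r["source"],
            "category": r["category"],  # AI-generated in the silver layer
            "summary": r["summary"],    # AI-generated in the silver layer
            "link": r["link"],
            "date": r["date"],
        })
    return gold
```

In the real pipeline, this table would then be written to storage and picked up by Power BI as the dashboard's data source.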
This project taught me a lot. In addition to the skills I mentioned earlier, I learned a great deal about troubleshooting, handling stress, documentation, and planning. I am very grateful to the great team here at NDUS for their support and collaboration in bringing this project to completion.

