Picture this: It’s December, and you’re staring at a pile of 20+ invoice PDFs scattered across your desktop. Sound familiar?
We’ve all been there. The holiday season is approaching, you’re dreaming of that well-deserved break, but first – the dreaded year-end expense reports. You know the drill:
Open each invoice PDF file
Copy and paste dates, amounts, and ticket numbers (hoping you don’t mix them up)
Navigate to that expense portal that takes forever to load
Fill out the same form fields over and over again
Upload attachments one by one
Click submit and pray nothing went wrong
Rinse and repeat… 20 more times
If you’ve ever done this mind-numbing dance, you’re not alone. It’s a boring, repetitive task that eats up hours and can easily introduce errors.
But what if there were a better way?
I built an AI-powered solution that automates the entire workflow—no APIs to integrate, no complex system changes. It works with any web-based expense system by extracting data from your PDFs and filling out forms directly in your browser, just like you would… minus the headache.
In the era of AI, integrating agents with external systems is crucial for expanding their capabilities. Whether accessing local file systems, remote databases, or APIs, external data significantly enhances an AI agent’s ability to assist in daily tasks.
However, connecting AI agents to external systems can be challenging. Different systems use various interfaces and protocols, making integration complex. Fortunately, Anthropic’s Model Context Protocol (MCP) simplifies this process, providing a standardized way for AI agents to interact with external resources and tools.
MCP functions similarly to HTTP on the internet—it defines a standardized approach that enables AI agents to discover and interact with external systems seamlessly.
Building an MCP Server for Tibber
In this blog post, I’ll demonstrate how I built an MCP server that integrates Tibber with an AI agent.
Tibber is a Norwegian power supplier that we’ve been using for years. They provide APIs that allow users to retrieve electricity prices and energy consumption data. By integrating Tibber’s API with an AI agent, I can ask natural language questions like:
“What is the current electricity price?”
“What was my house’s energy cost yesterday?”
“List the 3 cheapest hours of tomorrow.”
This integration enables a more intuitive and efficient way to access real-time energy data without manually querying APIs.
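Under the hood, a question like “list the 3 cheapest hours” reduces to sorting the hourly price list the Tibber API returns. A minimal sketch of that step (the record shape is illustrative, not pyTibber’s exact schema):

```python
def cheapest_hours(hourly_prices, n=3):
    """Return the n cheapest hours, sorted by price ascending.

    hourly_prices: list of dicts like {"startsAt": "01:00", "total": 0.1673}
    (illustrative shape; pyTibber's real objects differ).
    """
    return sorted(hourly_prices, key=lambda p: p["total"])[:n]

prices = [
    {"startsAt": "01:00", "total": 0.1673},
    {"startsAt": "18:00", "total": 0.7472},
    {"startsAt": "02:00", "total": 0.1821},
    {"startsAt": "13:00", "total": 0.3510},
]
print([p["startsAt"] for p in cheapest_hours(prices, 3)])  # → ['01:00', '02:00', '13:00']
```

The MCP server only needs to expose the raw price data; the agent’s LLM can then do this kind of selection and phrase the answer in natural language.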
Follow the installation steps in the repository to deploy Tibber MCP on your local machine. Once installed, configure the MCP server settings as follows:
Once the AI agent is successfully connected to the MCP server, you can see the available tools (and resources) the server provides. The screenshot shows the Roo Code MCP servers panel in VS Code.
Sample questions and answers
Question #1: “What is the current electricity price?” Answer:
The current electricity price (as of 16:00) is 0.6955 NOK/kWh with a NORMAL price level. This price consists of:
- Energy cost: 0.5328 NOK/kWh
- Tax: 0.1627 NOK/kWh

This is a moderate price compared to today's range:
- Lowest price today: 0.4924 NOK/kWh (at 23:00)
- Highest price today: 0.732 NOK/kWh (at 18:00)

Tomorrow's prices will be generally lower, with some very cheap periods:
- Lowest price: 0.1673 NOK/kWh (at 01:00)
- Highest price: 0.7472 NOK/kWh (at 18:00)
Question #2: “What was my house’s energy cost yesterday?” Answer:
Your total electricity cost for yesterday (March 27th) was:
- Total consumption: 63.811 kWh
- Total cost: 45.31 NOK

The most expensive hours were:
- 08:00-09:00: 0.7795 NOK/kWh
- 07:00-08:00: 0.7736 NOK/kWh
- 06:00-07:00: 0.7424 NOK/kWh
When reading the source code, you may have noticed how easy it is to build an MCP server using tools like FastMCP. With fewer than 100 lines of code, it efficiently wraps the pyTibber library into an MCP server implementation, enabling the AI agent to interact with it automatically.
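To give a feel for how little glue that takes, here is a minimal sketch in the same spirit (not the repo’s actual code: the tool returns canned data instead of calling pyTibber, and the field names are placeholders):

```python
def get_current_price() -> str:
    """Return the current electricity price as a human-readable string.

    In the real server this would query pyTibber; this sketch uses canned data.
    """
    price = {"total": 0.6955, "energy": 0.5328, "tax": 0.1627, "level": "NORMAL"}
    return (f"{price['total']} NOK/kWh ({price['level']}): "
            f"energy {price['energy']}, tax {price['tax']}")

def main():
    # Requires `pip install mcp`; the import is kept local so the
    # sketch can be read and loaded without the package installed.
    from mcp.server.fastmcp import FastMCP
    server = FastMCP("tibber")
    server.tool()(get_current_price)  # register the plain function as an MCP tool
    server.run()                      # serve over stdio for the AI agent
```

Calling `main()` starts the server; the agent then discovers `get_current_price` automatically through MCP’s tool listing.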
Currently, this MCP server implementation does not support parameters such as date or time when making API calls. For example, when you ask, “What was my energy consumption yesterday?”, it retrieves all consumption data for the past 30 days and relies on the LLM (Large Language Model) to extract the relevant information.
To improve efficiency and reduce token usage, we could implement parameterized API calls. This would allow the AI agent to pass a specific date (e.g., yesterday) and retrieve only the necessary data, making the interaction more precise and resource-efficient.
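A sketch of what such a parameterized tool could look like, filtering consumption records server-side so that only one day’s data reaches the LLM (the record shape is illustrative, not pyTibber’s exact schema):

```python
from datetime import date

def consumption_for_day(records, day):
    """Keep only records whose 'from' timestamp falls on the given date.

    records: list of dicts like
        {"from": "2025-03-27T08:00:00", "consumption": 2.1, "cost": 1.64}
    day: a datetime.date
    """
    prefix = day.isoformat()  # e.g. "2025-03-27"
    return [r for r in records if r["from"].startswith(prefix)]

records = [
    {"from": "2025-03-27T08:00:00", "consumption": 2.1, "cost": 1.64},
    {"from": "2025-03-27T09:00:00", "consumption": 1.8, "cost": 1.39},
    {"from": "2025-03-26T08:00:00", "consumption": 2.4, "cost": 1.71},
]
yesterday = consumption_for_day(records, date(2025, 3, 27))
print(len(yesterday), round(sum(r["cost"] for r in yesterday), 2))  # prints: 2 3.03
```

The agent would pass `day` as a tool argument, so the 30-day payload never enters the prompt.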
Feedback
If you have any questions or suggestions, feel free to reach out or create a GitHub issue for the repository: https://github.com/linkcd/tibber-mcp.
Imagine a world where AI agents aren’t working together to achieve a common goal. Instead, each agent is out to win the game of Rock-Paper-Scissors. The mission of each agent is straightforward: defeat the others.
Can a machine strategize in a game of pure chance? And if it can, which model will emerge victorious?
In order to answer that very question, I built a multi-agent system to host fully automated Rock-Paper-Scissors tournaments, pitting various AI models against one another to see who comes out on top. From OpenAI’s cutting-edge models to Meta’s Llama and Anthropic’s Claude, each agent brings its own “personality” and decision-making quirks to the table.
This isn’t just an experiment in gaming; it’s also a showcase of the latest capabilities in multi-agent systems. Using CrewAI and LangGraph, it is straightforward to create AI agents and orchestrate them in complex flows.
The graph and crew definitions can be found in the src folder of the GitHub repo.
Workflow:
In each round, two player agents make their moves independently and in parallel. They have access to the history of previous rounds, allowing them to analyze patterns and decide on the best move.
After the players make their moves, a judge agent determines the winner of the round.
The system checks whether the criteria for determining the final winner have been met (e.g., reaching the specified number of rounds, or a player winning 3 out of 5 rounds).
Criteria not met: another round begins.
Criteria met: the final winner is announced, and a post-game analysis is performed.
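Stripped of the agent framework, the judge and the winner criteria boil down to a few lines. A plain-Python sketch of that logic (the move strings and the best-of-5 threshold are assumptions for illustration):

```python
# Each move beats the move it maps to.
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def judge(move_a: str, move_b: str) -> int:
    """Return 0 for a draw, 1 if player A wins the round, 2 if player B wins."""
    if move_a == move_b:
        return 0
    return 1 if BEATS[move_a] == move_b else 2

def match_winner(round_results, wins_needed=3):
    """Best-of-5 criteria: first player with `wins_needed` round wins, else None."""
    if round_results.count(1) >= wins_needed:
        return 1
    if round_results.count(2) >= wins_needed:
        return 2
    return None

print(judge("rock", "scissors"))   # → 1 (player A wins the round)
print(match_winner([1, 2, 1, 1]))  # → 1 (player A reached 3 round wins)
```

In the real system the judge is itself an agent, but its decision rule is exactly this table lookup; the interesting behavior comes from how the player agents use round history to pick their moves.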
After running hundreds of matches, the results were nothing short of interesting – and sometimes hilarious. Let’s look at what we discovered.
Nowadays, it is common for companies to operate in multi-cloud environments, such as Azure and AWS. They often use Microsoft Entra ID (formerly Azure Active Directory) as their centralized identity provider (IdP), managing identities for both human users and applications. They would like to use the Entra ID identities to access resources in AWS.
Establishing human user identity access across Azure and AWS is straightforward. The IT department can use AWS IAM Identity Center to allow users from Microsoft Entra ID to sign in to the AWS Management Console with Single Sign-On (SSO) via their browser. This integration simplifies authentication, offering a seamless and secure user experience across both Azure and AWS environments. For more information, you can read this document.
However, the browser-based SSO approach for human users does not apply to applications.
For applications, developers follow security best practices by using cloud-native IAM (Identity and Access Management) mechanisms to manage resource access. In AWS, this mechanism is AWS IAM, while in Azure, it is typically Azure Managed Identity. For example, by leveraging Azure Managed Identity, developers can build applications in Azure without the need to manage secrets or keys.
This approach is known as secretless access to cloud resources.
AWS IAM and Azure Managed Identity work well within their respective platforms, but there are cross-cloud scenarios where a workload in one cloud needs to access resources in another. For instance, an Azure Function might need to save data to both an Azure Storage account and an AWS S3 bucket for cross-cloud backup. The Azure Function uses Managed Identity to access the Azure Storage account. For accessing S3, the developer could create an IAM user and store the IAM user credentials. However, there is a better way to achieve secretless access to both Azure and AWS resources using the same Azure Managed Identity.
We will build an Azure Function with a managed identity, either User-Assigned Managed Identity (UAMI) or System-Assigned Managed Identity (SAMI), to read objects from both an Azure Storage account and an AWS S3 bucket. This same managed identity will work in both Azure and AWS, eliminating the need to manage additional secrets such as AWS IAM user credentials.
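The exchange itself is OIDC federation: AWS IAM trusts Entra ID as a web identity provider, and the function trades its managed-identity token for temporary AWS credentials via STS `AssumeRoleWithWebIdentity`. A hedged sketch of that flow (the role ARN, token audience, and session name are placeholders specific to your deployment; requires `azure-identity` and `boto3`):

```python
def build_assume_role_request(role_arn, web_identity_token,
                              session_name="azure-func-s3-backup"):
    """Pure helper: the parameters for STS AssumeRoleWithWebIdentity."""
    return {
        "RoleArn": role_arn,
        "RoleSessionName": session_name,
        "WebIdentityToken": web_identity_token,
    }

def s3_client_from_managed_identity(role_arn, audience):
    """Exchange the Azure managed-identity token for temporary AWS credentials.

    `audience` must match what the OIDC identity provider in AWS IAM expects
    (typically tied to the Entra ID app registration); both arguments are
    deployment-specific. Imports are local so the sketch loads without the
    packages installed.
    """
    import boto3
    from azure.identity import ManagedIdentityCredential

    token = ManagedIdentityCredential().get_token(audience).token
    creds = boto3.client("sts").assume_role_with_web_identity(
        **build_assume_role_request(role_arn, token))["Credentials"]
    return boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
```

No long-lived secret appears anywhere: the Azure platform issues the token, and AWS hands back short-lived credentials scoped to the assumed role.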
“Foreigners and expats living outside of their home country deal with a large number of emails in various languages daily. They often find themselves struggling with language barriers when it comes to setting up reminders for events like business gatherings and customer meetings. To solve this problem, this post shows you how to apply AWS services such as Amazon Bedrock, AWS Step Functions, and Amazon Simple Email Service (Amazon SES) to build a fully-automated multilingual calendar artificial intelligence (AI) assistant. It understands the incoming messages, translates them to the preferred language, and automatically sets up calendar reminders.”
Discover my latest blog post on AWS official blog channel, where I delve into managing IoT devices from anywhere! Whether you’re interested in a humble Raspberry Pi application or eager to explore broader applications like home automation or industrial IoT solutions, this post has got you started.
This is a demo solution that uses AWS Step Functions for cloud orchestration and ECS Anywhere for local computing (on a NanoPi) to complete a simple data processing task.
Data flow
A user uploads a file to an S3 bucket
The S3 API call is recorded by CloudTrail and matched by an EventBridge rule
EventBridge triggers a Step Functions state machine
If the file name matches the condition, the state machine runs an ECS Anywhere task that downloads the file from S3 to the local device for processing
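The file-name condition can also live in the EventBridge rule itself, so the state machine is only invoked for matching keys. A sketch of such an event pattern, expressed as a Python dict (the bucket name and key prefix are placeholders, and the prefix filter is a simplified example of EventBridge content filtering):

```python
import json

# EventBridge pattern matching CloudTrail-recorded PutObject calls for keys
# under "incoming/" in a placeholder bucket.
event_pattern = {
    "source": ["aws.s3"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["s3.amazonaws.com"],
        "eventName": ["PutObject"],
        "requestParameters": {
            "bucketName": ["my-demo-bucket"],
            "key": [{"prefix": "incoming/"}],
        },
    },
}
print(json.dumps(event_pattern, indent=2))
```

Filtering this early keeps non-matching uploads from ever starting a state machine execution.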
Architecture
NanoPi that runs ECS Anywhere
NanoPi Neo2 with LED hat in my home office, running AWS ECS Anywhere.
AWS Systems Manager (SSM) is an AWS service that you can use to view and control your infrastructure on AWS, including securely connecting to managed nodes. The SSM Agent runs inside the instance’s operating system and is pre-installed on many Amazon Machine Images (AMIs).
With SSM:
No need to open SSH port in security group for EC2
No need to create and manage SSH keys
And SSM works regardless of whether the EC2 instance is in a public or private (NAT or endpoint) subnet.
Requirements for SSM to work:
AWS instances:
SSM agent installed in instance (pre-installed in many AMIs already)
Connectivity to the public SSM service endpoint (via IGW, NAT, or VPC endpoint)
IAM role providing permissions
On-Prem instances:
SSM agent installed in instance
Connectivity to the public SSM service endpoint (access to the public internet)
Activation (Activation Code and Activation ID)
IAM role providing permissions
2. EC2 Instance in public subnet
2.1. Make sure the EC2 instance has a public IP. It could be the public IP assigned during creation, or an Elastic IP.
2.2. The EC2 instance should have internet access (for calling the SSM endpoint). In a public subnet this is done via an Internet Gateway. See the “Connectivity to endpoints” section of the Session Manager prerequisites for details.
2.3. You can use VPC Reachability Analyzer to troubleshoot the connectivity between your EC2 and Internet gateway.
2.5 Attach the EC2 Instance profile to your instance.
2.6 Reboot the EC2 instances.
3. EC2 instance in private subnet, with NAT connectivity
In this case, EC2 instances have no public IP, but they can still reach the internet via NAT.
3.1. Make sure EC2 instances in the private subnet can access the internet, via a NAT Gateway or NAT instance.
3.2. The rest is the same as for EC2 instances in a public subnet, starting from step 2.2.
4. EC2 instance in private subnet, without NAT connectivity but VPC endpoints
In this case, the EC2 instance (with no public IP) reaches SSM via VPC endpoints instead of NAT, so some extra work is required.
4.1 Create VPC endpoints for Systems Manager (ssm, ssmmessages, and ec2messages). Remember to allow inbound HTTPS (port 443) traffic from your instances in the security group attached to the endpoints.
4.3 Attach this instance profile to your EC2 instance
4.4 Make sure “DNS resolution” and “DNS hostnames” are enabled for your VPC.
4.5 In addition, if your EC2 instance needs to access other AWS services such as S3, remember to create the needed endpoints for them as well. (For S3 you can choose either a Gateway or an Interface endpoint. At the moment the Gateway endpoint is free.) Note that you need to add the endpoint to the private subnet route table. The following screenshot shows the route table entry of an S3 Gateway endpoint, which uses prefix lists.
5. Verification
Once SSM is fully up and running, the EC2 instance (whether in a public or private subnet) will appear in Fleet Manager in the SSM web console.
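The same check can be done programmatically instead of in the console. A hedged boto3 sketch (requires `boto3` and configured AWS credentials, so the import and call are kept inside functions rather than run at load time):

```python
def managed_instance_ids(ssm_client):
    """Return the instance IDs SSM currently sees as managed nodes."""
    ids = []
    paginator = ssm_client.get_paginator("describe_instance_information")
    for page in paginator.paginate():
        ids.extend(info["InstanceId"] for info in page["InstanceInformationList"])
    return ids

def main():
    import boto3  # requires `pip install boto3` and AWS credentials
    print(managed_instance_ids(boto3.client("ssm")))
```

If your instance does not show up here, the usual suspects are the instance profile (step 2.5 / 4.3) and endpoint connectivity (step 2.2 / 4.1).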
This is a happy-path demo of setting up Okta as the IdP for AWS Control Tower (via AWS SSO). Goal: to utilize users and groups in Okta to manage AWS Control Tower.
1. Create a brand new Control Tower instance
In this demo, we create the AWS Control Tower instance in a brand new AWS account. During this process, Control Tower creates several services and components, such as AWS Organizations, AWS SSO, a default organizational unit (OU) named “Security”, and two AWS accounts, “Log Archive” and “Audit”.
In the AWS SSO, some default SSO user groups are created for managing Control Tower:
The default admin user for the organization management account is “AWS Control Tower Admin”.
Detailed user info
It belongs to two groups: AWSAccountFactory and AWSControlTowerAdmins.