What is API Monitoring?
API monitoring is the process of observing the functionality, availability, and performance of an API. The API could be an internal, external, or third-party API that another application relies on. You want to monitor an API because:
- Your application depends on the services or data provided
- Your API has performance guarantees or SLAs (service level agreements) that it must comply with
There are different ways to monitor an API, but an organization will typically choose an API monitoring tool that helps it collect and analyze metrics. The metrics are chosen ahead of time to verify that benchmarked requirements are being met.
“In computing, a benchmark is the act of running a computer program, … or other operations, in order to assess the relative performance of an object, …” (Benchmark (computing), Wikipedia)
An example of an API metric used in API monitoring is latency. Web APIs are accessed over a network, which means that programs have to make an API call to use their services or data. Latency is the amount of time between sending a request and the application receiving a response. How long an API call takes is a function of factors like location, response size, environment, and load.
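As a simple illustration, and independent of any particular monitoring tool, you could measure the latency of a single API call like this; the endpoint URL below is a placeholder:

```python
import time

import requests  # third-party HTTP client: pip install requests

API_URL = "https://api.example.com/pets"  # placeholder endpoint

start = time.perf_counter()
response = requests.get(API_URL, timeout=10)
latency = time.perf_counter() - start  # seconds between sending the request and receiving the response

print(f"status={response.status_code} latency={latency * 1000:.0f}ms "
      f"size={len(response.content)} bytes")
```

A monitoring tool essentially runs measurements like this on a schedule, from multiple locations, and aggregates the results for you.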
How Does API Monitoring Work?
The best way to explain how API monitoring works is to describe how a team would plan to implement it.
First, they would want to prioritize their API routes, functions, processes, or services. Just like with API testing, it’s more difficult to decide what to monitor than to actually monitor.
Then, after prioritizing the important API functionality, the team would decide on the metrics that define the API’s success (a sketch of how such targets might be encoded follows this list):
- Is this route available 98% of the time?
- Is the average latency less than 3s?
- Are response sizes less than 1MB?
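Targets like these can be captured as plain data and checked against the metrics you collect. The following sketch is purely illustrative; the names and thresholds are assumptions, not part of any particular tool:

```python
# Hypothetical success criteria expressed as plain threshold checks.
REQUIREMENTS = {
    "availability": 0.98,             # fraction of checks that must succeed
    "avg_latency_s": 3.0,             # average latency ceiling, in seconds
    "max_response_bytes": 1_000_000,  # roughly 1 MB response-size ceiling
}

def meets_requirements(availability: float, avg_latency_s: float,
                       largest_response_bytes: int) -> bool:
    """Return True if the measured values satisfy every threshold."""
    return (availability >= REQUIREMENTS["availability"]
            and avg_latency_s <= REQUIREMENTS["avg_latency_s"]
            and largest_response_bytes <= REQUIREMENTS["max_response_bytes"])

print(meets_requirements(availability=0.991, avg_latency_s=1.2,
                         largest_response_bytes=250_000))  # True
```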
At this point, it would be appropriate to choose the API monitoring tool. The tool should provide the necessary metrics. Additionally, it should provide methods for creating API requests that accurately represent real API requests (response size, location, etc.). We’ll discuss this further in the following section.
Once an API monitoring tool is chosen, the team integrates the API with the tool and begins generating metrics. The tool allows requirements to be adjusted over time, and stakeholders can respond quickly to failures in performance, functional requirements, or availability.
Environment
API monitoring is usually conducted in production. That is, we are testing the real-world response time for API functionality. Monitoring an API locally or in staging may yield results that are inaccurate or unrelated to real use cases. That said, monitoring whether tests are failing in development or staging remains critical to the API development process.
Monitoring takes place outside of the applications that may access the API (i.e., from a monitoring or testing tool). This keeps the metrics agnostic to the technology stack that is being used to access the API.
Choosing an API Monitoring Tool
As mentioned in the previous section, choosing the right API monitoring tool is important because the tool needs to fit your monitoring plan. Next, let’s look at a few features that are important to consider when choosing an API monitoring tool.
Metrics
The most common metric to measure with APIs is latency: how long does a request take? Latency is only one metric, but it can be sliced in many different ways. Most importantly, when dealing with networks, the locations of the client and server will affect latency. The response size or request size will affect it as well. We quickly end up measuring a single metric, latency, across many combinations of response size and location.
Your API monitoring tool should allow you to mix-and-match different combinations of test factors to see how your API performs under the various circumstances that it will face.
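For instance, a tool might let you sweep one factor at a time. This hypothetical sketch varies the request payload size against a placeholder endpoint and records the latency of each combination:

```python
import time

import requests  # pip install requests

API_URL = "https://api.example.com/echo"  # placeholder endpoint

# Sweep the request payload size and record the latency of each call.
for size in (1_000, 10_000, 100_000):
    payload = {"data": "x" * size}
    start = time.perf_counter()
    requests.post(API_URL, json=payload, timeout=10)
    latency = time.perf_counter() - start
    print(f"payload={size:>7} bytes -> latency={latency * 1000:.0f}ms")
```

Location is the one factor you cannot vary from a single machine, which is why monitoring tools run agents in multiple regions on your behalf.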
Ease-of-Use
It’s common for technology, especially new technology, to be underutilized. This is true whether it’s software or hardware. The tool’s complexity should match your team’s ability to use it, and the user interface should be easy to navigate. Most users have an eye for software that takes a clean user interface seriously; you can usually follow your gut.
Assertions
Assertions are in the same category as metric factors. API requests can be complicated, so an important question to ask is: does this tool help us create test scenarios that accurately reflect how our API is used? The answer will vary by organization and API.
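In code form, an assertion is just a predicate about a response. A minimal sketch, assuming a hypothetical pet-creation endpoint and field names:

```python
import requests  # pip install requests

# Placeholder endpoint and payload; the field names are hypothetical.
response = requests.post("https://api.example.com/pets",
                         json={"name": "Rex", "photoUrls": []}, timeout=10)
body = response.json()

# Assertions that mirror how a real client would judge this response.
assert response.status_code == 200, f"unexpected status {response.status_code}"
assert isinstance(body.get("id"), int), "id should be a number"
assert body.get("name") == "Rex", "name should round-trip unchanged"
```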
Integration
Integration parallels ease-of-use. Common questions to ask in this category are:
- Does this tool support our API specification version (e.g., OpenAPI/Swagger 3.x)?
- How much work will it take to get our API routes configured?
- Does this tool already integrate with our current API software?
Many API tool vendors rightly try to stay technology agnostic. At the same time, they may offer easy paths for organizations to integrate their current API setup.
Alerts
Your API monitoring tool needs to provide alerts; users shouldn’t be constrained by whether or not they are at their computers watching the metrics unfold. Check whether the tool provides email alerts, SMS alerts, and/or webhooks.
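Webhooks are the most flexible of the three, because the tool POSTs a JSON payload to a URL you control whenever a check fails. Here is a minimal receiver sketch using only Python’s standard library; the payload field names are assumptions, since every tool documents its own format:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class AlertHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        alert = json.loads(self.rfile.read(length))
        # Field names are hypothetical; check your tool's webhook docs.
        print(f"ALERT: test={alert.get('test')} status={alert.get('status')}")
        self.send_response(204)  # acknowledge receipt with no body
        self.end_headers()

HTTPServer(("", 8080), AlertHandler).serve_forever()
```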
Additionally, some tools provide CI/CD integrations, so if an API starts to fail in development (or in a staging environment), the failing changes never reach the production API.
List of API Monitoring Tools
Now that we have talked about what API monitoring is, and what factors to consider when choosing a tool, let’s briefly introduce some of the more popular API monitoring tools.
RapidAPI
With RapidAPI, you can combine RapidAPI Testing and RapidAPI Teams to monitor, test, and administer your internal APIs, as well as manage access and usage within your team to selected external APIs.
Also, API monitoring with RapidAPI Testing is included as part of the RapidAPI Testing subscription. You can find out more about RapidAPI Testing on the product page.
Later in this article, we’ll go through a tutorial on how to set up a test and monitoring system using RapidAPI Testing.
SmartBear (ReadyAPI)
ReadyAPI is a service offered by SmartBear that is broken up into three service offerings: API testing, API performance, and API virtualization.
You’ll need to purchase both an API testing and API performance license to get testing and API monitoring.
Uptrends
Uptrends provides, “Website Monitoring, Web Application Monitoring, and API Monitoring.” They provide monitoring as well as other free tools such as a ping test and DNS reporting.
AWS CloudWatch
AWS CloudWatch claims to be, “…a monitoring and observability service built for DevOps engineers, developers, site reliability engineers (SREs), and IT managers.” AWS CloudWatch is part of the Amazon Web Services (AWS) cloud computing services.
CloudWatch pricing can be difficult to understand, and beyond the scope of this article to explain, but their pricing page has examples if you want to explore it further.
Postman Monitoring
Postman Monitoring is a service for monitoring the health of your APIs. You can add Postman monitoring on top of any paid or free plan you already have. Postman counts “monitors” and API calls when determining pricing usage.
API Fortress
API Fortress is another API monitoring and testing service that can be used in a hosted cloud environment, a self-hosted cloud environment, or a hybrid environment.
API Fortress determines pricing by seats and whether you plan to self-host or not. Outside of those factors, pricing information seems to be intentionally hidden.
Browse API Fortress Alternatives.
View the best API monitoring tools here and learn more about API automation tools here.
How to Monitor API Calls with RapidAPI Testing
In this tutorial, we are going to take a look at how to set up API monitoring on RapidAPI Testing. The testing dashboard integrates with all your APIs on RapidAPI, so you can use an existing API if you would like to. If you don’t have any APIs on RapidAPI—don’t worry—the first part of the tutorial will go through adding an API so you can follow along in later sections.
This also means that if you are already familiar with RapidAPI Testing, you can skip down to the section on setting up monitoring. You can learn more about RapidAPI Testing (e.g., setting up test variables, chaining requests together) in this article on API testing.
1. Sign Up for an Account on RapidAPI
If you don’t have an account on RapidAPI you’ll need to sign up—but don’t worry—it’s free to create an account!
2. Create an API
Download API Spec
This tutorial uses a Swagger API Specification for an example API named Pet Store. You can click the link below to download the JSON file.
Download Pet Store Swagger API Spec
Now that you have the file, we are ready to create our API on the RapidAPI Testing dashboard.
Add New API
Navigate to the testing dashboard at rapidapi.com/testing/dashboard. You may need to sign in to your RapidAPI account. After signing in, select Create API in the top right of the page.
In the pop-up, enter the new API’s information:
- Select the Context
- Enter an appropriate API name (Pet Store)
- Select the API specification file
- Choose OpenAPI for Definition type
Click OK.
The new API should appear on your testing dashboard alongside any existing APIs. Also, it will appear in the account context that you chose when creating the API.
Click on the new API tile to enter the Pet Store API testing dashboard.
Build a Test
On the next page, click Create Test.
Name the test Add Pet.
After you name the test, you’ll be taken to the user interface for building tests. You have the option to build tests manually. However, it’s easier to use the Request Generator.
Inspect Request Generator
At the bottom of the Add Pet test dashboard, click on the highlighted Request Generator bar.
After clicking on the Request Generator, a new pane will appear that has three sections.
On the left, you’ll see your endpoints. If you click on an endpoint, the middle pane will populate with the information (and sample data) for that route.
Click the Send button in the middle section next to the route’s URL.
If you get an error, you may need to click the Body tab in the middle section and select “json”.
You should receive a successful response in the right section if your API has valid sample data.
Now that you have received a response, we can add this outcome to our Add Pet test. Click Add to Test.
Select Assertions
A new pop-up will appear asking which parts of the response we want to add to the test. Here, we toggle properties on or off; if a property is on, it will be added to the test. Furthermore, we can use the dropdown on the right to modify the type of assertion. In the image below, I have selected the assertions that I feel fit the properties best.
Once we finish selecting assertions, click OK at the bottom of the pop-up. Close the Request Generator and observe all the assertions that we have now added, as individual steps, to our test. You can drag-and-drop to reorder test steps, and you can modify the content of a step by selecting the angle icon on its right.
Now, we are ready to run the test.
You can read an extended version of this tutorial (e.g., setting up test variables, chaining requests together) as part of this article on API testing.
Run Test & View Results
At the top right of the Add Pet test dashboard, click Save & Run.
If the test runs successfully, the result should appear in the bottom left of the test dashboard.
Click on the new result to view the Execution Report. The report displays the test conditions and details aspects of individual steps. For example, the apiResponse.data.id attribute is of type number, but its value is also displayed in the report. Furthermore, you can compare the response schema with your test schema by selecting Show Details for a step.
If a test fails, the value that caused the test to fail is displayed in the Execution Report. This makes it easier to find and fix errors.
Create Environments
Next, let’s create two environments to use in our API monitoring later. To create environments, click on the Settings icon in the left navigation.
In the Test Environments section, click on the + icon to name your new environment. Environments show up as tabs along the top of the Test Environments section.
You define environment variables the same way you define test variables. For this example, you can change the URL for your tests by specifying the protocol and domain.
I specified two variables for each environment.
The domain and protocol for development are examples. You should not expect that domain to exist on your local computer. Back in the individual tests, I can modify the URLs for each request and select the environment I want to use before I execute a test.
With the environment variables, we can share variables across different tests. With test variables, we can scope values that are important to a specific test.
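Conceptually, an environment is just a named set of variables that gets substituted into each request. A toy illustration, with example values only:

```python
# Hypothetical environment definitions mirroring the two created above.
ENVIRONMENTS = {
    "production":  {"protocol": "https", "domain": "petstore.example.com"},
    "development": {"protocol": "http",  "domain": "localhost:8080"},
}

def build_url(env_name: str, path: str) -> str:
    """Compose a request URL from an environment's shared variables."""
    env = ENVIRONMENTS[env_name]
    return f"{env['protocol']}://{env['domain']}{path}"

print(build_url("production", "/pet"))   # https://petstore.example.com/pet
print(build_url("development", "/pet"))  # http://localhost:8080/pet
```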
3. Schedule Tests to Monitor API Performance with RapidAPI Testing
Now that we have an API, separate environments, and a test set up, we can schedule tests to run based on:
- Frequency
- Environment
- Location
To schedule a test, click on the Add Pet test. Then, select Schedule in the side navigation.
Next, click on the Add Schedule button.
When we schedule tests, we can configure the schedule frequency, environment, and location. These settings can be set per route and per API, making it easy to match your monitoring to a route’s priority.
For the first test, select Every 1 Minute, production, and São Paulo.
Click Submit.
Schedule another test with the same frequency and environment, but change the Locations option to Default.
Let the scheduled tests run for a couple of minutes. To turn the tests off, toggle the option on the left of each scheduled test.
4. View Execution Reports
When monitoring APIs we want to make sure that routes can accomplish their functional and performance requirements. We handle the functional promise of our APIs when building the steps to our individual API tests. We address the performance requirements of APIs by asking, how quickly can each route perform the functional promise? This question is answered in the Execution Reports.
To view the execution reports, click on the Results option inside the Add Pet test.
On the Results page, I get a quick look at all of my tests. I have the option to filter the tests by whether they:
- Succeeded
- Failed
- Canceled
I can then sort the tests by any of the parameters below:
- Created
- Status
- Environment
- Location
- Trigger
- Successful
- Execution Time
Filter the results by successful tests and sort them by Execution Time. Using this table I can monitor my successful tests and view trends that appear by location or test.
Depending on your location you may see different trends. However, test API requests made from AWS-SA-EAST-1 (São Paulo) tend to take much longer than those made from my default location.
If I wanted to look further into the execution time breakdown, perhaps I have a complicated test, I can click on View Report.
You can view the different test steps, and how long each took, to investigate the cause of a slow execution time. In this report, it’s easy to recognize that the request is responsible for the majority of the total execution time. This can help you monitor performance issues for individual routes.
For higher-level monitoring of your API, RapidAPI Testing has a couple of different tools.
5. Monitor Latency vs. Size
Pinpointing rogue tests or individual execution reports is helpful in a variety of scenarios. However, variations in requests by route can be difficult to judge. A helpful graph for establishing a benchmark of performance versus response size can be found by clicking on the graph icon in RapidAPI Testing.
The graph on this page averages the time each test took for a given response size. If you hover over one of the colored circles, you can view the latency, route, and number of requests for each data point.
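Under the hood, a chart like this simply groups results by response size and averages the latency within each group. A rough sketch over hypothetical execution-report records:

```python
from collections import defaultdict

# Hypothetical execution-report records: (response size in bytes, latency in ms).
results = [(512, 180), (512, 210), (2048, 350), (2048, 410), (512, 190)]

buckets = defaultdict(list)
for size, latency_ms in results:
    buckets[size].append(latency_ms)

for size, latencies in sorted(buckets.items()):
    avg = sum(latencies) / len(latencies)
    print(f"size={size:>5}B requests={len(latencies)} avg_latency={avg:.0f}ms")
```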
Once you have set up tests for your routes, you can get a better idea of the baseline performance for each test. Then, you can monitor your API over time by changing the date range in the graph.
The final step of this API monitoring tutorial is to set alerts for failed tests.
6. Set Alerts
So far, we have looked at how to schedule tests to run at different frequencies, from different locations, and in different environments. This allows us to keep an eye on the performance of our API routes using the Results table inside of an API in RapidAPI Testing. We can filter, sort, and then view detailed execution reports that break down the latency of API calls given the test variables.
Furthermore, we looked at an aggregate chart that averages out latency and response size per API route in the main dashboard. The final step in API monitoring is moving quickly to remedy a route that is failing.
To set alerts, click on the Configuration icon in the primary navigation.
At the bottom, you can set email and SMS alerts by providing names and valid contact information. Additionally, you can toggle the added notifications on-or-off given the state of the API.
SMS alerts will send a text message to the number specified detailing the name of the test that failed and the amount of time the test ran for.
Email alerts include a link to the failed test’s execution report.
This allows for fast responses to API issues with easy access to why a route is failing. Combining alerts with the dashboard report and analytics features lets us:
- Monitor the health of the API when things are going well
- Respond to failures quickly when things break
Conclusion
In this article, we discussed the basics of API monitoring and why it is important. Also, we looked at API monitoring tools and what you should expect when choosing one to use for your API. Finally, we took the time to walk through setting up API monitoring using RapidAPI Testing and how to use the testing features for general API monitoring and quick response.
API monitoring is a process that involves the functional and performance requirements of an API. This article focused more on monitoring latency and whether a route is failing. To learn about making sure your routes are delivering on their functional promises you can check out another article on API testing.
You can learn more about monitoring API usage with RapidAPI for Teams in the article below. Thanks for reading!