As I am using multiple APIs for my work, I noticed that I often get connection errors and timeouts when sending requests. To avoid handling that in every file, I'd like to create a middleware that intercepts all requests sent from my project and retries them when an error occurs.
For example, let's say I have 2 requests to 2 different APIs in different files, like this:
- request1.py
request1 = requests.post('url', data)
- request2.py
request2 = requests.get('url', data)
And I would like my middleware to do like
- middleware.py
if request:
    try:
        request
    except Error:
        # redo request
And the goal would be that if, for example, request1 crashes, the middleware retries the request until it gets a result, and the value ends up in my request1 variable. Would something like this be possible?
Thanks in advance for any help!
>Solution :
Yes, it is indeed possible to create a middleware to handle retries for your API requests in case of connection errors or timeouts.
Here’s an example of how you could create the middleware:
import requests
import time

def retry_request(request_func, max_retries=3, retry_delay=1):
    retries = 0
    while retries < max_retries:
        try:
            response = request_func()
            response.raise_for_status()  # Raise an HTTPError for 4xx/5xx responses
            return response
        except (requests.RequestException, ConnectionError, TimeoutError) as e:
            print(f"Request failed: {e}")
            retries += 1
            time.sleep(retry_delay)
    raise Exception("Max retries exceeded. Request failed.")
# Usage example:
# from api_middleware import retry_request
# response = retry_request(lambda: requests.get('url', data))
# Use 'response' variable to access the API response data.
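If you want to sanity-check the retry loop without hitting a real API, you can feed it a deliberately flaky callable. This is a hypothetical demo with a simplified copy of retry_request (it drops raise_for_status, since the fake call doesn't return a Response object), so the snippet runs on its own:

```python
import time

# Simplified copy of retry_request so this snippet is self-contained
def retry_request(request_func, max_retries=3, retry_delay=1):
    retries = 0
    while retries < max_retries:
        try:
            return request_func()
        except (ConnectionError, TimeoutError) as e:
            print(f"Request failed: {e}")
            retries += 1
            time.sleep(retry_delay)
    raise Exception("Max retries exceeded. Request failed.")

attempts = {"count": 0}

def flaky_call():
    # Hypothetical stand-in for a real API call: fails twice, then succeeds
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise ConnectionError("simulated network error")
    return "ok"

result = retry_request(flaky_call, max_retries=5, retry_delay=0)
print(result)  # → ok
```

The first two calls raise, the third succeeds, and the result is returned as if the request had worked on the first try.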
Let me know if this helps!
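As an alternative worth mentioning: requests can also retry at the transport level for you, via urllib3's Retry mounted on a Session. The numbers below (3 retries, the status list) are just example values to tune for your APIs:

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
retries = Retry(
    total=3,           # up to 3 retries per request
    backoff_factor=1,  # exponential backoff between attempts
    status_forcelist=[429, 500, 502, 503, 504],  # also retry on these HTTP statuses
)
session.mount('https://', HTTPAdapter(max_retries=retries))
session.mount('http://', HTTPAdapter(max_retries=retries))

# Every request made through this session now retries automatically:
# response = session.get('url')
```

The advantage is that you only configure the session once and then use session.get / session.post everywhere, instead of wrapping each call in a helper.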