Rate Limit Issues

In8 Member Posts: 5
edited July 11 in Development

We are getting 429 errors all the time while doing an integration with NetSuite. This makes it hard to guarantee that data, stock, and items/customers are updated correctly, and to query for information we need, such as finding the parent order for a return. We have had to put in 4-second delays and retry the same request up to 5 times to cope, but even then there is still a chance the data never gets accepted.
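
Roughly what our retry wrapper looks like now (a simplified sketch; the URL and payload are placeholders, not our real integration code):

import time
import requests

def send_with_retry(method, url, max_attempts=5, delay=4, **kwargs):
    # Wait 4 seconds between attempts and give up after 5 tries,
    # which is exactly where data can end up never being accepted.
    for attempt in range(max_attempts):
        response = requests.request(method, url, **kwargs)
        if response.status_code != 429:
            return response
        time.sleep(delay)
    raise RuntimeError(f"Still rate limited after {max_attempts} attempts: {url}")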


It seems like the server, instead of refusing the request, should queue it and respond as soon as it has been processed. That would provide stability and help ensure no data goes missing just because the server told you to try again later.

This is the only API I have seen that behaves this way. I build automated API integrations with scripts that are not constantly monitored by human eyes.


Can someone on the Lightspeed API team please contact me about updating your servers to accept and process requests in a more stable fashion? Otherwise this API will result in customers leaving Lightspeed for other solutions that do not allow these kinds of holes.


Also, funneling stock updates, customer updates, item updates, and everything else through the same API data flow makes the error happen even more often when they run at the same time. Each record type should have its own request/response endpoint to keep things more efficient and stable.


19 comments

  • gregarican Member Posts: 469

    The API thresholds and stats are all passed back in the API response headers. If you have control over any custom code that this integration uses then you can inspect those response headers and scale back your request pipeline accordingly.

    What I did for our integration routines is basically time them to the worst-case API limits for GETs as well as PUTs and POSTs. That way I don't even need to worry about inspecting the response headers. Of course that means some pretty sluggish throughput, but it is what it is :)

  • In8 Member Posts: 5
    edited July 13

    That is a workaround, not a solution from Lightspeed for how they run their API. If people using this POS have a large business and need data updated more quickly, there is no way to do it. We are limited, and subject to the downfall of a badly programmed API.

  • gregarican Member Posts: 469

    What other POS or e-com solutions have you integrated with that don't have API limits? I'm curious. The others I have integrated with (e.g., Shopify) all have some API caps, although Lightspeed's are admittedly stingier than any of the others I've dealt with. For example, each day I export the current on-hand products, that day's sales, refunds, transfers, returns, etc. via the API so I can port them into local SQL and get better reporting. That process takes over 3.5 hours for roughly 8,500 items and maybe a couple dozen transactions :(

  • In8 Member Posts: 5

    Shopify, Vend, and WooCommerce.

    They have caps, but they seem to have request queues and dedicated endpoints on the server side. Lightspeed seems to be far more restrictive.


    Shopify, Vend, and WooCommerce have never given us a 429 or anything like it telling us to try again later.

  • In8 Member Posts: 5
    edited July 24

    This also makes syncing very slow, which contradicts the brand name "Lightspeed", because I cannot update and create at light speed. They should update the name to "SlowSpeed".


    Lol but true

  • gregarican Member Posts: 469

    Agreed, if you look at the other platforms, their API rate limits are nowhere near as restrictive as Lightspeed's. And most of the others don't differentiate the costs of GET, PUT, POST, etc. request types. When it comes to Lightspeed Retail's API, I don't even bother throttling my requests based on the response headers that come back. I just hard-code my requests to the worst case of 1 request/second for GETs and 1 request/10 seconds for any other request type.
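
    In code that's nothing fancier than this (a sketch of the hard-coded approach; the delays are just those worst-case figures):

    import time
    import requests

    # Worst-case costs: 1 GET per second, 1 write per 10 seconds.
    WORST_CASE_DELAY = {"GET": 1.0, "PUT": 10.0, "POST": 10.0, "DELETE": 10.0}

    def throttled_request(method, url, **kwargs):
        # Sleep the worst-case interval before every call, so the rate
        # bucket can never overflow, at the cost of sluggish throughput.
        time.sleep(WORST_CASE_DELAY.get(method.upper(), 10.0))
        return requests.request(method, url, **kwargs)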

    Obviously this makes scalability an issue. Hence one reason why we couldn't convert our larger company sites to Lightspeed Retail. Only our smaller sites are using it.

  • In8 Member Posts: 5

    I really wish the Lightspeed employees who do the API development browsed the community. It seems they do not care?

  • bash Member Posts: 1

    For us the limits are also nowhere near sufficient. We sync flower products into Lightspeed, and these products have a ton of upstream updates: stock availability, new products coming in, etc. On top of that, it costs an additional API call to set product details for each shop language...

    It causes the same problems the OP describes: you simply cannot be sure your data is in sync if you have to back off for minutes or even hours to update all your products.

    I wonder if Lightspeed was designed to sell T-shirts and nothing more...

  • thisconnect Member Posts: 13

    Please let me add to this thread.

    I also have a lot of problems with this rate limit.

    When I need to update stock, I first have to fetch the stock (GET) and then change it (PUT). If there were a call to update relatively (+ or -), that would be one call fewer.
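
    As a sketch, the round trip looks like this (the endpoint and payload shapes are illustrative only, not the real Lightspeed ones):

    import requests

    BASE = "https://api.example-pos.com"  # placeholder base URL

    def adjust_stock(session: requests.Session, item_id: int, delta: int):
        # One stock change costs two calls against the rate bucket:
        item = session.get(f"{BASE}/items/{item_id}").json()   # GET to read current stock
        new_qty = item["stock"] + delta
        session.put(f"{BASE}/items/{item_id}", json={"stock": new_qty})  # PUT to write it back

    A relative update call (+ or -) would collapse that to a single PUT.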

    It is very difficult to do batch processing through the API. I also have to throttle my scripts and build my own queue just to do basic things.

    The Lightspeed API is much more restrictive than any other API,

    and it looks like it never gets updated...

    Does any API employee look at this forum? (Not some support person who just agrees and says, "we will let the development team know...")

  • gregarican Member Posts: 469

    I'd think that if the back-end DB resources are virtual/cloud-based, they could be scaled up to loosen the API rate limit. Same with traffic bandwidth.

    To be totally frank and transparent, we have Lightspeed Retail in place for our smaller company, for brick-and-mortar POS. For our larger company we are still evaluating Shopify POS for its B&M operation. Both companies have Shopify integrated for back-end e-commerce. Based on my experience working with both solution providers over the past several years (their hardware, their support, their API, their web-based apps, etc.), I can say that Shopify's solution is definitely a few notches above Lightspeed's.

    Specifically, when it comes to the APIs there is no comparison. Every 3 months Shopify updates their API with new endpoints and features, and you can roll back to a previous version if you encounter a breaking change by simply specifying a different URI. Lightspeed's API, on the other hand, has been stagnant with its wrinkles and gaps for too long, frankly...

  • jtellier Member Posts: 38

    This rate limit is not what they say it is, for sure. We are doing an information update, one PUT every 5 seconds to update manufacturerSku data, and after about 20 transactions we get this "Rate Limit" error.


    Also, none of the things they say are in the headers actually appear there. I have attached all the return headers from a PUT.


    This rate limit issue is not implemented properly and is clearly a debilitating bug in the system.


  • gregarican Member Posts: 469

    They are present in the response headers. Please refer to this document --> https://community.lightspeedhq.com/en/discussion/28/best-practices. The two X-LS-API headers represent the drip rate and the bucket level.

    I already replied to your other post regarding API rate limits. Unless I am mistaken, you can perform 1 GET request per second, but PUT and POST requests are typically limited to 1 request every 10 seconds. If you are making 1 PUT every 5 seconds, that explains what you're seeing.


  • VintageWineGuy Member Posts: 97 ✭

    I won't comment on the best practice nature of their implementation, but it has always at least worked consistently for me. In your screenshot I see everything I use to manage rates.

    X-LS-API-Bucket-Level is showing 10/60, meaning the max bucket size is 60 and you have currently consumed 10. Max bucket size changes depending on time of day/server load, so you need to check it and split the value into your current bucket_level and bucket_size.

    X-LS-API-Drip-Rate is showing the current drip_rate is 1, so your bucket_level will decrease by 1 every second. drip_rate also changes depending on time of day and server load; sometimes it is higher, like 1.5 or 2.

    That is all you need to manage the rate. A GET costs 1, and a PUT/POST/DELETE costs 10. So in your screenshot you can do ~5 more PUTs, which would raise bucket_level to 60, before you start getting a 429. You would then need to wait at least 10 seconds per additional PUT if the drip_rate is 1.

    Here is some simple Python code I have used to manage the rate (this runs inside a class, so self.response is the most recent requests response and self.expires is the token expiry as a Unix timestamp):

    import logging
    import time

    api_drip_rate = float(self.response.headers['X-LS-API-Drip-Rate'])

    # The bucket level comes back as a fraction ("10/60"), so split it apart.
    api_bucket_level, api_bucket_size = [float(x) for x in self.response.headers['X-LS-API-Bucket-Level'].split('/')]

    logging.debug(f"MANAGE RATE: Used {api_bucket_level} of {api_bucket_size}, refreshing at {api_drip_rate}, {self.expires - time.time()} sec. left on token.")

    if api_bucket_size < api_bucket_level + 10:
        # Not enough headroom left for another PUT/POST (cost 10), so pause.
        logging.info("MANAGE RATE: Bucket is almost full, taking a break.")
        time.sleep(10)


    This is a dead simple, unoptimized rate limit handler, but it works. Just grab the bucket_level and bucket_size and when you get too close, take a break. This snippet is out of a manage_rate() function I use anytime I make a call to the API.

    PS: When I manage the rate, I also check the expiration of my token in case it needs a refresh, which will also throw errors in a long job.
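
    That check is just a couple of lines (a sketch; refresh_token() stands in for whatever your own OAuth refresh routine is):

    # self.expires is the token expiry as a Unix timestamp, as above.
    if self.expires - time.time() < 60:  # less than a minute of validity left
        self.refresh_token()  # hypothetical helper wrapping the OAuth refresh call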

  • thisconnect Member Posts: 13

    OK, maybe we can check the rate and throttle our calls, but 1 PUT per 10 seconds? Come on... I have never seen that in any other API I've used...

    It is almost impossible, as I need to update product stock fast after items are added to my solution (stock moves from Lightspeed (-1) to my solution (+1)).

    Waiting 10 seconds to update one product's stock can mean waiting hours to update the stock of many products. This is stupid...

    Why can't we send multiple updates in one call?

  • gregarican Member Posts: 469
    edited August 13

    @thisconnect I took a peek at your profile and see you aren't an actual Lightspeed Retail customer. More an integration service provider, correct? So you are likely familiar with various POS and e-com platforms. Therefore I can concur with your assertion that the 1 PUT/POST every 10 seconds (with no bulk operations) is woefully inadequate.

    A few other caveats I usually chime in with on here, in case some of your LS Retail customers are more challenging to integrate with.

    1) Any customer who uses a large number of custom product fields across a large number of products will incur significant performance penalties. Other than via the API, these fields are only accessible through the product detail page in the web UI. Using our smaller subsidiary company as an example, with 8,700+ products containing a dozen custom fields, it takes operators about 45 seconds to fully commit a new/edited product in the web UI. And that's after hitting the save button.

    2) Just as these custom fields inhibit performance on the front end, they cause API response delays on the back end. Say we have a limit of 1 GET request per second. Well, if we pull a page of products with load_relations used to fetch their custom fields and other related offshoots, it's more like 1 GET response every 10-20 seconds.
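
    For illustration, the kind of request I mean looks like this (the account ID and token are placeholders, and the relation names are examples; check the docs for what your endpoints support):

    import json
    import requests

    url = "https://api.lightspeedapp.com/API/Account/12345/Item.json"  # placeholder account ID
    params = {"load_relations": json.dumps(["ItemShops", "CustomFieldValues"])}
    headers = {"Authorization": "Bearer <access_token>"}  # placeholder token

    # Still one GET against the bucket, but the joined relations make the
    # server do far more work, so the response takes much longer to return.
    response = requests.get(url, params=params, headers=headers)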

  • thisconnect Member Posts: 13

    Correct, I'm not a retail user. We offer a software solution for baby shops to manage their baby registries, so we connect with a lot of different shops.

    The API throttle is a pain in the ..s

    And bugs in the API never seem to get fixed (like double EAN numbers, layaways, negative stock, ...).

  • gregarican Member Posts: 469

    ...while we are talking about gotchas, there's another one that I just remembered. This might or might not be impactful, depending on how your code is handling API responses.

    Let's say there is a response field that can be an array. Like this:

    "fooArray": [{ "field": "1"}, {"field": "2"}]

    Basically fooArray is a field that is an array. Well, if fooArray happens to just contain a single item for a particular record, the result is passed back as a singleton. Like this:

    "fooArray": {"field": "1"}

    Not all API endpoint responses are handled this way, but some are. I finally had to create a custom JSON response handler as a workaround for this.
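
    The workaround boils down to something like this (a minimal sketch of the idea, not my exact handler):

    payload = {"fooArray": {"field": "1"}}  # a single record arrives as a bare object

    def as_list(value):
        # Normalize the quirk: one record comes back as an object, many as a list.
        if value is None:
            return []
        return value if isinstance(value, list) else [value]

    for entry in as_list(payload.get("fooArray")):
        print(entry["field"])  # works for both the singleton and list shapes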
