RATE LIMIT Issues

In8 Member Posts: 8
edited July 2020 in Development

We get 429 errors all the time while doing our integration with NetSuite. This makes it hard to guarantee stability: ensuring data, stock, and items/customers are updated correctly, and querying for information we need, such as finding the parent order for a return. We have had to add 4-second delays and retry the same request up to 5 times to cope, but even then there is still a chance the data is never accepted.
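
For what it's worth, the retry pattern described above can be sketched roughly like this; the `session` object and the `Retry-After` handling are illustrative assumptions, and the 4-second delay and 5 attempts mirror the values mentioned in this post:

```python
import time

def put_with_retry(session, url, payload, attempts=5, delay=4.0):
    """Retry a PUT up to `attempts` times when the API answers 429,
    sleeping between tries. Returns the last response either way."""
    response = None
    for _ in range(attempts):
        response = session.put(url, json=payload)
        if response.status_code != 429:
            return response  # accepted, or a non-rate-limit error
        # Use Retry-After if the server provides it, else the fixed delay.
        wait = float(response.headers.get("Retry-After", delay))
        time.sleep(wait)
    return response  # still rate-limited after all attempts
```

Even with this in place, data can be dropped once the attempts run out, which is the core complaint here.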


It seems the server, instead of refusing the request, should accept and queue it, then respond as soon as it has been processed. That would provide stability and help ensure no data goes missing simply because the server told you to try again later.

This is the only API I have seen that does this. I build automated API integrations with scripts that are not constantly monitored by human eyes.


Could someone on the Lightspeed API team please contact me about updating your servers to accept and process requests in a more stable fashion? Otherwise, customers relying on the API will leave Lightspeed for solutions that do not allow these kinds of holes.


Also, running stock updates, customer updates, item updates, and anything else through the same API data flow at the same time makes the error happen even more often. Giving each record type its own request/response endpoint would keep things more efficient and stable.


23 comments

  • gregarican Member Posts: 683

    The API thresholds and stats are all passed back in the API response headers. If you have control over any custom code that this integration uses then you can inspect those response headers and scale back your request pipeline accordingly.

    What I did for our integration routines is basically time them to the worst-case API limits for GETs as well as PUTs and POSTs. That way I don't even need to inspect the response headers. Of course that means some pretty sluggish responses, but it is what it is :)
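
    A minimal sketch of that worst-case pacing, assuming the 1-second GET and 10-second write intervals discussed later in this thread (these are assumptions from the discussion, not official documented values):

```python
import time

# Worst-case intervals (seconds) per request method, per this thread.
MIN_INTERVAL = {"GET": 1.0}
DEFAULT_INTERVAL = 10.0  # PUT, POST, DELETE, ...

_last_call = {}

def pace(method):
    """Sleep just long enough that calls of this method never exceed
    the worst-case rate, so response headers never need inspecting."""
    interval = MIN_INTERVAL.get(method, DEFAULT_INTERVAL)
    elapsed = time.monotonic() - _last_call.get(method, 0.0)
    if elapsed < interval:
        time.sleep(interval - elapsed)
    _last_call[method] = time.monotonic()
```

    Calling `pace("PUT")` before every write request keeps the pipeline under the cap at the cost of throughput.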

  • In8 Member Posts: 8
    edited July 2020

    This is not a solution from Lightspeed to how they run their API. If people using this POS have a large business and need data updated more quickly, there is no way to do it. We are limited by, and subject to, the downfall of a badly programmed API.

  • gregarican Member Posts: 683

    What other POS or e-com solutions have you integrated with that don't have API limits? I'm curious. Of the others I have worked with (e.g., Shopify), they all have some API caps, although Lightspeed's are admittedly stingier than any of the others I've dealt with. For example, each day I export the current on-hand products, that day's sales, refunds, transfers, returns, etc. via the API so I can port them into local SQL and get better reporting. That process takes over 3.5 hours for roughly 8,500 items and maybe a couple dozen transactions :(

  • In8 Member Posts: 8

    Shopify, Vend, WooCommerce.

    They have caps, but they seem to have request queues and dedicated endpoints on the server side. Lightspeed seems very restrictive by comparison.


    Shopify, Vend, and WooCommerce never give us a 429 or the like and tell us to try again later.

  • In8 Member Posts: 8
    edited July 2020

    This also makes syncing very slow, which contradicts the brand name "Lightspeed", because I cannot update and create at "light speed". They should rename it "SlowSpeed".


    Lol but true

  • gregarican Member Posts: 683

    Agreed, if you look at the other platforms, their API rate limits are nowhere near as restrictive as Lightspeed's, and most of the others don't differentiate the cost of a GET, PUT, POST, etc. request. When it comes to Lightspeed Retail's API, I don't even bother throttling my requests based on the response headers that come back; I just hard-code them to the worst case of 1 request/second for GETs and 1 request/10 seconds for any other request type.

    Obviously this makes scalability an issue. Hence a reason why we couldn't convert our larger company sites to Lightspeed Retail. Just our smaller sites are using it.

  • In8 Member Posts: 8

    I really wish the Lightspeed employees who do the API development browsed the community. It seems they do not care?

  • bash Member Posts: 3

    For us the limits are also nowhere near sufficient. We sync flower products into Lightspeed, and these products have a ton of upstream updates (stock availability, new products coming in, etc.). On top of that, it costs an additional API call to set product details for each shop language...

    It causes the same problems as the OP describes: you simply cannot be sure your data is in sync if you have to back off for minutes or even hours to update all your products.

    I wonder if LightSpeed was designed to sell T-shirts and nothing more..

  • thisconnect Member Posts: 19

    Please let me add to this thread.

    I also have a lot of problems with this rate limit.

    When I need to update stock, I first have to read the stock (GET) and then change it (PUT). If there were a call to adjust the stock (+ or -), that would be one call less.
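
    That two-call read-modify-write looks something like the sketch below; the endpoint path and the `qty` field are illustrative placeholders, not the exact Lightspeed Retail schema:

```python
def adjust_stock(session, base_url, item_id, delta):
    """Apply a relative stock change (+/- delta) using two API calls:
    a GET to read the current quantity, then a PUT to write the new
    one. A single "adjust by delta" endpoint would halve the cost."""
    url = f"{base_url}/Item/{item_id}.json"
    current = session.get(url).json()["qty"]
    return session.put(url, json={"qty": current + delta})
```

    At the per-call costs discussed in this thread, that is 11 points for what could be a single adjustment request.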

    It is very difficult to do batch processing through this API. I also have to throttle my scripts and build my own queue just to do basic things.

    The Lightspeed API is much more restrictive than any other API,

    and it looks like it never gets updated...

    Does any API employee look at this forum? (Not a support person who just agrees and says, "we will let the development team know...")

  • gregarican Member Posts: 683

    I'd think if the back-end DB resources are virtual/cloud-based then those resources could be scaled up to loosen up the API rate limit. Same with traffic bandwidth.

    To be totally frank and transparent, we have Lightspeed Retail in place for our smaller company, for brick-and-mortar POS. For our larger company we are still evaluating Shopify POS for its B&M operation. Both companies have Shopify integrated for back-end e-commerce. From my experience working with both solution providers (their hardware, support, API, web-based apps, etc.) over the past several years, I can say that Shopify's solution is definitely a few notches above Lightspeed's.

    Specifically when it comes to the APIs there is no comparison. Every 3 months Shopify updates their API with new endpoints and features, and you can roll back to a previous version if you encounter a breaking change simply by specifying a different URI. Lightspeed's API, by contrast, has been stagnant with its wrinkles and gaps for too long, frankly...

  • jtellier Member Posts: 55
    edited August 2020

    This rate limit is not what they say it is, for sure. We are doing an information update, one PUT every 5 seconds to update manufacturerSku data, and after about 20 transactions we get this "Rate Limit" error.


    Also, none of the things they say are in the header actually are there. I have attached all the return headers from a PUT.


    This rate limiting is not implemented properly and is clearly a debilitating bug in the system.


  • gregarican Member Posts: 683

    They are present in the response headers. Please refer to this document --> https://community.lightspeedhq.com/en/discussion/28/best-practices. The two X-LS-API headers represent the drip rate and the bucket level.

    I already replied to your other post regarding API rate limits. Unless I am mistaken, you can perform 1 GET request per second, but PUT and POST requests are typically limited to 1 request every 10 seconds. If you are making 1 request every 5 seconds, that explains what you're seeing.

  • VintageWineGuy Member Posts: 117 ✭

    I won't comment on the best practice nature of their implementation, but it has always at least worked consistently for me. In your screenshot I see everything I use to manage rates.

    X-LS-API-Bucket-Level showing 10/60 means the max bucket size is 60 and you have currently consumed 10. Max bucket size will change depending on time of day/server load, so you need to check it and split the value into your current bucket_level and bucket_size.

    X-LS-API-Drip-Rate shows that the current drip_rate is 1, so your bucket_level will decrease by 1 every second. drip_rate also changes depending on time of day and server load; sometimes it is higher, like 1.5 or 2.

    That is all you need to manage the refresh rate. A GET costs 1 and a PUT/POST/DELETE costs 10, so in your screenshot you can do about 5 more PUTs (which would bring bucket_level up to 60) before you start getting a 429. After that you need to wait at least 10 seconds if the drip_rate is 1.

    Here is some simple Python code I have used to manage the rate:

    # Snippet from a manage_rate() method; assumes `import logging`,
    # `import time`, and `from time import sleep`, with self.response
    # holding the last API response and self.expires the token expiry.
    api_drip_rate = float(self.response.headers['X-LS-API-Drip-Rate'])

    # The bucket level comes back as a fraction ("10/60"), so split it
    # apart to get the pieces we need.
    api_bucket_level, api_bucket_size = [float(x) for x in self.response.headers['X-LS-API-Bucket-Level'].split('/')]

    logging.debug(f"MANAGE RATE: Used {api_bucket_level} of {api_bucket_size}, refreshing at {api_drip_rate} and {self.expires - time.time()} sec. left on token.")

    if api_bucket_size < api_bucket_level + 10:
        logging.info("MANAGE RATE: Bucket is almost full, taking a break.")
        sleep(10)


    This is a dead-simple, unoptimized rate limit handler, but it works. Just grab bucket_level and bucket_size, and when you get too close, take a break. This snippet comes from a manage_rate() function I call any time I make an API request.

    PS: When I manage the rate, I also check the expiration of my token in case it needs a refresh, which will also throw errors in a long job.
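
    That expiry check can be as simple as comparing against a safety margin; the function and attribute names here are illustrative, not from any Lightspeed SDK:

```python
import time

def token_needs_refresh(expires_at, margin=60.0, now=None):
    """True when the OAuth access token is within `margin` seconds of
    its expiry timestamp, so a long job refreshes before a call fails."""
    if now is None:
        now = time.time()
    return now >= expires_at - margin
```

    Running this alongside the rate check means a long batch job never stalls on an expired token mid-run.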

  • thisconnect Member Posts: 19

    OK, maybe we can check the rate and throttle our calls, but 1 PUT per 10 seconds? Come on... I have never seen that in any API I have used...

    It is almost impossible, as I need to update a product's stock quickly after it is added to my solution (stock moves from Lightspeed (-1) to my solution (+1)).

    Waiting 10 seconds to update one product's stock can mean waiting hours to update the stock of many products. This is stupid...

    why can't we send multiple updates in 1 call?

  • gregarican Member Posts: 683
    edited August 2020

    @thisconnect I took a peek at your profile and see you aren't an actual Lightspeed Retail customer. More an integration service provider, correct? So you are likely familiar with various POS and e-com platforms. Therefore I can concur with your assertion that the 1 PUT/POST every 10 seconds (with no bulk operations) is woefully inadequate.

    A few other caveats I usually chime in with on here, in case some of your LS Retail customers are more challenging to integrate with.

    1) Any customer who uses a large number of custom product fields across many products will incur significant performance penalties. Other than via the API, these fields are only accessible via the product detail page in the web UI. Using our smaller subsidiary as an example: with 8,700+ products containing a dozen custom fields, it takes operators about 45 seconds to fully commit a new or edited product in the web UI, and that's after hitting the save button.

    2) Just as these custom fields inhibit performance on the front end, they cause API response delays on the back end. Say we have a limit of 1 GET request a second; if we pull a page of products with load_relations used to fetch their custom fields and other related offshoots, this becomes more like 1 GET response every 10-20 seconds.

  • thisconnect Member Posts: 19

    Correct, I'm not a retail user. We offer a software solution for baby shops to manage their baby registries, so we connect with a lot of different shops.

    the API throttle is a pain in the ..s

    And bugs in the API never seem to get fixed (like duplicate EAN numbers, layaways, negative stock, ...).

  • gregarican Member Posts: 683

    ...while we are talking about gotchas, here's another one I just remembered. It might or might not be impactful, depending on how your code handles API responses.

    Let's say there is a response field that can be an array. Like this:

    "fooArray": [{ "field": "1"}, {"field": "2"}]

    Basically, fooArray is a field that is an array. But if fooArray happens to contain just a single item for a particular record, the result is passed back as a bare object (a singleton). Like this:

    "fooArray": {"field": "1"}

    Not all API endpoint responses are handled this way, but some are. I finally had to create a custom JSON response handler as a workaround.
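
    A workaround handler can be as small as normalizing every such field to a list before use; a sketch:

```python
def as_list(value):
    """Normalize a field that may come back as a list of objects, a
    single bare object (the singleton case), or be missing entirely."""
    if value is None:
        return []
    if isinstance(value, list):
        return value
    return [value]

# Both response shapes now yield the same structure:
singleton = as_list({"field": "1"})                    # bare object
normal = as_list([{"field": "1"}, {"field": "2"}])     # real array
```

    Calling this on every array-capable field lets the rest of the code loop unconditionally.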

  • In8 Member Posts: 8
    edited August 2020

    @Adrian Samuel

    Do you have contacts who could help get the API updated and improved? Because this sucks.

  • Adrian Samuel Moderator, Lightspeed Staff Posts: 654
    edited August 2020

    Hey everyone,

    I'll give my thoughts on each issue raised here:

    @In8, your use case of a queue that processes REST API calls isn't industry standard; it sounds more akin to some kind of queue-processing middleware that returns a response to a webhook. It's an interesting implementation, but not one I've come across before. Do you have some documentation for this type of implementation?

    Also please serialise your requests. Concurrent requests to our API on the same API client will cause strange issues.

    @jtellier, one request every 5 seconds will likely be too fast. A PUT costs 10 points, so at the default limit of a 60-point bucket refilling at 1 point per second, you can sustain just over 6 such requests a minute; you'll hit your rate limit too quickly that way. @gregarican gives some great advice on how to handle rate limiting, as does our documentation here:

    https://developers.lightspeedhq.com/retail/introduction/ratelimits/

    @thisisbolo With our current infrastructure setup, our API limits are what we can tolerate for now. We hope this will change as we update, upgrade, and implement new systems for the betterment of all. Significant effort has been put towards this.

    @gregarican That's a classic gotcha that gets us all! This has also been looked at internally. I can't say this is going to change soon but it definitely has merit in addressing.

    The issues you've all brought up here are very valid, and understandably nothing is ever simple when it comes to managing and improving both infrastructure and software, including our public APIs. We hope that as we make improvements you will continue to provide honest feedback that we can champion internally together.


    Adrian Samuel

    Software Developer

    Lightspeed HQ

  • In8 Member Posts: 8
    edited September 2020

    @Adrian Samuel

    We do not use middleware at all; we connect directly to the LS API from NetSuite, simply requesting and receiving through the API. "Queue" just means we have a set of records in NetSuite that we need to make requests for, waiting for a reply from the API before moving forward: item data updates and creation, stock updates, order data, customer data, etc.

    All standard data records that need to be created or updated from NS to LS, or from LS to NS, directly.

    Rate limiting and bottlenecking all requests in Lightspeed, seemingly through the same access point (pipe), causes issues when trying to update more than one type of record at a time, e.g., a customer update from NS to LS while a stock update is also running.

    Most systems outside Lightspeed let us update multiple things on the site without telling us we request too much.

    I have direct API implementations with Shopify, Vend, WooCommerce, Zapier, Amazon, and Salesforce, for example, and NONE of them have this type of restriction or these issues with multiple requests to different record endpoints.

    Lightspeed is the only one I have seen that restricts requests so much it is impossible to do any multitasking at all.

  • gregarican Member Posts: 683
    edited September 2020

    @In8 I can tell you based on my hands-on experience that Shopify definitely has API rate limiting. Although not as strict as Lightspeed's --> https://shopify.dev/concepts/about-apis/rate-limits. As for the others, most any API service provider invokes some form of API rate limiting. Here's the first one I pulled up --> https://developer.salesforce.com/docs/atlas.en-us.salesforce_app_limits_cheatsheet.meta/salesforce_app_limits_cheatsheet/salesforce_app_limits_platform_api.htm.

    I assume that your implementations might utilize Zapier as a middleware-type service. If so then you should be able to configure some timing --> https://zapier.com/help/create/customize/add-delays-to-zaps.

    While we don't utilize a third-party API aggregator, in all of my middleware projects I have to implement some form of response header parsing to ensure I'm not hitting any potential API rate limits. Or in the case of Lightspeed, I just automatically time things to the worst-case API limit for the GET, PUT, POST, etc. call so there are no surprises.

    None of this is to say that Lightspeed's API rate limits are reasonable. In my opinion they aren't. If their back-end DB is cloud-based, I have no idea why they can't scale things up to handle what has been the norm for many years now.

  • flexjoly Member Posts: 36

    Coming from eCom, the rate limit in retail is really low!

    In eCom you can do 300 requests in 5 minutes. That is much nicer than calculating in seconds.

    For example, we want to fill a matrix with 10 items. It is easy to do that from one script, but after about 7 items LSR says: rate-limit error.... 😵😡

    FEATURE REQUEST: make the rate limit like eCom's: a maximum per few minutes.

    Or: make it possible to update/create items, matrices, etc. in bulk, so that we can send an array of items, matrices, ....


    Greetz, flexjoly

  • In8 Member Posts: 8

    Thanks, all, for the input. We are now reviewing the bucket headers. The remaining issue is that Lightspeed applies a global rate limit across requests to different endpoints, so side-by-side requests get in each other's way. It would be great if Lightspeed gave each endpoint its own pipe for requests.
