increase the API rate limit

Ankit Geeks Posts: 4 Member
edited September 10 in Development

Hi Support,

Please let me know how to increase the API rate limit.

Thanks

Ankit

19 comments

  • gregarican Posts: 137 Member ✭
    I'll let LS support speak to this, but I will tell you that the rate limit is more of a dynamic sliding scale. Depending on what time of day the API is hit, the limits can improve or decline. All of the particulars are present in the API response headers.

    The killer for us is that the API pushes are governed 10 times more than the pulls. Pull one record a second (which is relatively stingy) versus push one record every 10 seconds, for example. Of course the goal is to not have any outside parties negatively impacting response time for the LS Retail web client, so I get it in the big picture :smiley:
  • Ankit Geeks Posts: 4 Member
    Payload for creating a SaleLine:

    {
        "employeeID": "21",
        "itemID": "26308",
        "shopID": "2",
        "saleID": "21617",
        "createTime": "2018-09-19",
        "unitQuantity": "1"
    }
  • Alex Lugo Posts: 60 Administrator, Lightspeed Staff moderator
    Hello @Ankit Geeks,

    As @gregarican said, the rate limit changes on a schedule. You can avoid 429s by looking at your bucket size and adjusting your request speed accordingly.

    https://developers.lightspeedhq.com/retail/introduction/ratelimits/
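    For example, here's a minimal Python sketch of that approach. This is only a sketch: the header names come from the docs linked above, and the `cost` parameter is a placeholder for however many units your next request will consume.

```python
import time

def wait_for_bucket(headers, cost=1.0):
    """Sleep just long enough for the bucket to fit the next request.

    `headers` is the dict of response headers from your previous API call.
    X-LS-API-Bucket-Level is "used/size"; X-LS-API-Drip-Rate is units per second.
    """
    used, size = (float(x) for x in headers["X-LS-API-Bucket-Level"].split("/"))
    drip = float(headers.get("X-LS-API-Drip-Rate", 1))
    headroom = size - used
    if headroom < cost:
        # Wait until enough units have dripped out of the bucket to fit this request.
        time.sleep((cost - headroom) / drip)
```

    Call this before each request, feeding it the headers from the previous response, and you should stay just under the limit instead of bouncing off 429s.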
    Alex Lugo
    API Support Specialist
    Lightspeed HQ
  • Ankit Geeks Posts: 4 Member
    edited September 21
    The main issue has been when accessing my custom app from within LS: it's taking longer to load.
  • gregarican Posts: 137 Member ✭
    The only increased response times I've noticed in the LS Retail web client involve adding more inventory with lots more custom field values defined. For example, when adding/editing a product, the post-save completion can take about 40 seconds with around 4,000 SKUs. If we didn't employ so many custom fields, the UX would be better.

    As for the LS Retail API, it's a similar story. We time out after 30 seconds trying to pull a 100-item page of products. If I cut that down to around an 80-item page, we can pull it in around 20 seconds. Again, the plethora of custom fields we've employed is notably decreasing performance.

    Not sure if this helps, but when you referred to LS Retail taking longer to load, this is what came to mind for us at least!
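    For what it's worth, that page-size tradeoff can be handled adaptively instead of hard-coding 80. A rough Python sketch, where `fetch_page(offset, limit)` is a hypothetical stand-in for whatever HTTP call you actually use:

```python
def pull_all(fetch_page, page_size=100, min_size=20):
    """Pull every record, halving the page size whenever a request times out.

    `fetch_page(offset, limit)` should return a list of records,
    or raise TimeoutError when the server takes too long.
    """
    records, offset = [], 0
    while True:
        try:
            page = fetch_page(offset, page_size)
        except TimeoutError:
            if page_size <= min_size:
                raise  # already as small as we're willing to go
            page_size = max(min_size, page_size // 2)
            continue  # retry the same offset with a smaller page
        records.extend(page)
        if len(page) < page_size:
            return records  # short page means we've reached the end
        offset += len(page)
```

    This way accounts with heavy custom-field usage degrade to smaller pages automatically, while lighter accounts keep the full 100-item throughput.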
  • Alex Lugo Posts: 60 Administrator, Lightspeed Staff moderator
    Hi @Ankit Geeks,

    Could you please explain what you mean by accessing your app from Lightspeed?
    Alex Lugo
    API Support Specialist
    Lightspeed HQ
  • doodiestark Posts: 8 Member
    So from within a sale you click the Loyalty enroll/view button: http://www.doodiestark.co.uk/wp-content/uploads/2018/09/DS_Loyalty_SalesScreen.jpg and it takes you out of the sales screen and into the loyalty page on our server, in exactly the same way it did with Thirdshelf.
  • doodiestark Posts: 8 Member
  • danip Posts: 9 Member
    @Alex Lugo
    We are checking bucket size and drip rate after every call and have put a wait script in place to try to mitigate the 429 issue, but we're still hitting a brick wall. We have several scripts running simultaneously. One particular script updates over 500 records in LS. If we have to wait 2, 3, or even 5 seconds per update, our script takes an hour to run, when in fact it could be done in 10 minutes without rate limiting. How do you expect us to be happy with the performance? Are there any alternatives, such as unlimited access for a whitelisted IP, or double or triple the client bucket size? Thanks for your help.
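    As a reactive fallback alongside the wait script, you can also retry individual updates on a 429 with backoff rather than aborting the run. A rough sketch, where `do_update(record)` is a hypothetical placeholder for your real call returning the HTTP status code:

```python
import time

def update_with_retry(do_update, record, max_tries=5, base_delay=1.0):
    """Retry a single update when the API answers 429, backing off each time.

    `do_update(record)` stands in for your actual API call; it should
    return the HTTP status code of the response.
    """
    for attempt in range(max_tries):
        status = do_update(record)
        if status != 429:
            return status
        time.sleep(base_delay * (2 ** attempt))  # 1 s, 2 s, 4 s, ...
    raise RuntimeError("still rate limited after %d tries" % max_tries)
```

    With several scripts running at once, a shared proactive throttle is still the better fix, but this at least keeps one script's 429s from killing the batch.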
  • Adrian Samuel Posts: 199 Moderator, Lightspeed Staff moderator
    @danip I understand your frustration, but the drip rates are designed to maintain server integrity, and thus we can't make exceptions to that.
  • danip Posts: 9 Member
    @Adrian Samuel
    Hm... server integrity vs. speed, functionality, usability, production, and customer satisfaction? Since we're only able to update a measly 240 records per minute, a bulk update feature would be nice. Why not let us send you an array and process it asynchronously? I'll keep on dreaming...
  • doodiestark Posts: 8 Member
    Whitelisted IP sounds like a great idea, as we're only talking about two fixed premises. Is this possible? It always seems to take longer when you've got a queue of people waiting to pay!
  • doodiestark Posts: 8 Member
    What I'm trying to understand is why it definitely loaded quicker when linking to Thirdshelf than it does in our new loyalty platform. LS, can you do some testing on the two setups to see what the difference is?
  • gregarican Posts: 137 Member ✭
    edited September 27
    danip said:
    @Alex Lugo
    We are checking bucket size and drip rate after every call and have put a wait script in place to try to mitigate the 429 issue, but we're still hitting a brick wall. We have several scripts running simultaneously. One particular script updates over 500 records in LS. If we have to wait 2, 3, or even 5 seconds per update, our script takes an hour to run, when in fact it could be done in 10 minutes without rate limiting. How do you expect us to be happy with the performance? Are there any alternatives, such as unlimited access for a whitelisted IP, or double or triple the client bucket size? Thanks for your help.
    If you are seeing record pushes every 2-5 seconds, that's a blessing. Depending on the time of day and whatnot, I thought record pushes could experience drip rates of 1 call every 10 seconds. Importing a large product repo with images took us 3 full days.

    What would be nice is if the API client could push record collections in a single API call. For example, the JSON body could consist of a maximum of 100 records in a single API call. That would provide optimal throughput of 10 records/second. Similar to the paginated nature of API record pulls.
  • Niranloh Posts: 1 Member
    gregarican said:
    ...

    What would be nice is if the API client could push record collections in a single API call. For example, the JSON body could consist of a maximum of 100 records in a single API call. That would provide optimal throughput of 10 records/second. Similar to the paginated nature of API record pulls.
    Respecting the fact, of course, that standard user experience is paramount, and that the LS team has put an enormous amount of work into the customer solution and the API... I second this concept. I have worked with other APIs where this sort of paradigm is used, and it allows us to push fewer, denser requests. This seems to make it less stressful on the API and more usable for the developer. If need be, the consumption of data can be passed to and managed by a different system after being accepted by the API. The client would simply need some sort of progress token if they want to check how the data is being processed. I suppose easier said than done.

    All in all, ~6 POST/DELETE/PUT requests per minute makes it very impractical to keep even modest datasets synchronized, and I would love to see a better option.
  • oimagine Posts: 2 Member
    Hello, how do we check if our account's API rate limit has been reduced? I've read https://developers.lightspeedhq.com/retail/introduction/ratelimits/
    but it doesn't tell me how to check if there have been any changes, as we've often been getting 429 errors lately.
    Thanks
  • gregarican Posts: 137 Member ✭
    oimagine said:
    Hello, how do we check if our account's API rate limit has been reduced? I've read https://developers.lightspeedhq.com/retail/introduction/ratelimits/
    but it doesn't tell me how to check if there have been any changes, as we've often been getting 429 errors lately.
    Thanks
    This limit fluctuates throughout the day. It can be higher during off-hours and lower during the "busier" times of day. Reading the header values can display what's currently being observed.

    For example, header values like this:

    X-LS-API-Bucket-Level: 10/60
    X-LS-API-Drip-Rate: 1
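
    Read literally, those two values tell you how much room is left. A quick interpretation in Python (assuming the documented weighting where a GET costs 1 unit and a write costs 10, per the docs linked earlier; treat the exact costs as something to verify for your account):

```python
def headroom(bucket_level):
    """Units still free, given the X-LS-API-Bucket-Level header value ("used/size")."""
    used, size = (float(x) for x in bucket_level.split("/"))
    return size - used

room = headroom("10/60")   # 50.0 units free
gets = int(room // 1)      # roughly 50 more GETs before a 429
puts = int(room // 10)     # roughly 5 more POST/PUTs, if a write costs 10 units
```

    The drip rate of 1 then tells you the bucket drains by 1 unit per second, which is how fast that headroom replenishes once you've spent it.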

  • oimagine Posts: 2 Member
    edited December 6
    gregarican said:
    This limit fluctuates throughout the day. It can be higher during off-hours and lower during the "busier" times of day. Reading the header values can display what's currently being observed.

    For example, header values like this:

    X-LS-API-Bucket-Level: 10/60
    X-LS-API-Drip-Rate: 1
    Thanks, however I don't even know how or where to read the header values you mentioned. Please let me know how.
    Thanks a lot.
  • gregarican Posts: 137 Member ✭
    I'm not sure what development platform or language you're using, but if you are handling the web response from the API endpoint, you should be able to access the headers passed back in the HTTP response.

    Below is a simple example I ran from command-line curl. Note the headers listed at the beginning of the HTTP response. That's where you would find the API call limit details when you are hitting an LS Retail API endpoint.

    Like I said, I don't know what platform or language you are using for your project, but even using curl you could employ basic regular expressions to parse the headers and determine how close you are to hitting an HTTP 429 error.

    Hope this helps some?



    HTTP/1.1 200 OK
    Accept-Ranges: bytes
    Cache-Control: max-age=604800
    Content-Type: text/html; charset=UTF-8
    Date: Fri, 07 Dec 2018 13:24:39 GMT
    Etag: "1541025663+ident"
    Expires: Fri, 14 Dec 2018 13:24:39 GMT
    Last-Modified: Fri, 09 Aug 2013 23:54:35 GMT
    Server: ECS (ord/572F)
    X-Cache: HIT
    Content-Length: 1270

    <!doctype html>
    <html>
    <head>
        <title>Example Domain</title>

        <meta charset="utf-8" />
        <meta http-equiv="Content-type" content="text/html; charset=utf-8" />
        <meta name="viewport" content="width=device-width, initial-scale=1" />
        <style type="text/css">
        body {
            background-color: #f0f0f2;
            margin: 0;
            padding: 0;
        font-family: "Open Sans", "Helvetica Neue", Helvetica, Arial, sans-serif;
    }
        div {
            width: 600px;
            margin: 5em auto;
            padding: 50px;
            background-color: #fff;
            border-radius: 1em;
        }
        a:link, a:visited {
            color: #38488f;
            text-decoration: none;
        }
        @media (max-width: 700px) {
            body {
                background-color: #fff;
            }
            div {
                width: auto;
                margin: 0 auto;
                border-radius: 0;
                padding: 1em;
            }
        }
        </style>
    </head>

    <body>
    <div>
        <h1>Example Domain</h1>
    <p>This domain is established to be used for illustrative examples in documents. You may use this
    domain in examples without prior coordination or asking for permission.</p>
        <p><a href="http://www.iana.org/domains/example">More information...</a></p>

    </div>
    </body>
    </html>
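
    The regex parsing mentioned above could look like this in Python, run against a captured raw response (the sample headers here are made up for illustration, using the header names from earlier in the thread):

```python
import re

# A fabricated raw HTTP response, for illustration only.
RAW_RESPONSE = """HTTP/1.1 200 OK
Content-Type: application/json
X-LS-API-Bucket-Level: 10/60
X-LS-API-Drip-Rate: 1
"""

def parse_rate_headers(raw):
    """Pull the X-LS-API-* rate-limit headers out of a raw HTTP response."""
    return dict(re.findall(r"^(X-LS-API-[\w-]+):\s*(\S+)\s*$", raw, re.MULTILINE))
```

    Feeding it a response captured with something like `curl -i` would give you a small dict of the current bucket level and drip rate to drive your throttling logic.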




