The first thing I have to do with every single file I open is a 'Find, Replace': why are all of the monetary figures exported with "¬£" in front of them (copied and pasted straight from Excel, as it is constantly in my 'Find' box)? I'd rather have no denomination and rely on the column heading. Oh, and column headings that start with certain characters like "=" also do not play nicely in Excel.

If there's a setting somewhere I missed please let me know so I can save a little bit of time!


Can someone please point me to where I should go to assist my CFO?


Sometimes we can build in-table summaries of the results that Analytics gives us.

Let's say that we want to summarize results by First Subcategory.

Let's say that we started with the First Subcategory report that we customized earlier, where we begin with the complete category but then remove everything following the first subcategory.

We could prepare data to gather totals by First Subcategory, but we would need Calculations to do this, as we cannot use a calculated field as a dimension.

These are the logical steps we need to make this happen:

- Sort the results by First Subcategory

- Determine which is the last instance of each First Subcategory

- Count how many rows contain the same First Subcategory

- Gather the results from each collection of First Subcategories

- Gather the First Subcategories and their respective results.

The crux of what we are preparing is the Offset List calculation, one of the tools from our Table Functions.

Ready to start? Cool

Sort by First Subcategory

This should be easy enough: we can do it by clicking on the "Item Category" label, which should sort everything accordingly.

So far so good

Is this First Subcategory the same as the next one?

This will help us identify the last instance of each First Subcategory.

Let's start now using a simple Offset function. I'm going to use Offset to look at the "First Subcategory" field

and to ask it to look one row down...

so when I save, to the right of each First Subcategory, we should see the next First Subcategory...

Then we want to see if they are the same. We can do this with one more simple calculation: =

If the values are equal, Analytics will return a Yes; if not, Analytics will return a No.

So we'll start with brackets...

...look for the "First Subcategory" calculation...

...copy the offset function looking at the row below...

...and then paste it into the second set of brackets...

...so that when we save...

We get a Yes/No calculation where the "No" identifies the final instance of every First Subcategory.
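The Yes/No logic above can be sketched in plain Python (an illustration of the Offset comparison, not the Analytics syntax):

```python
# Compare each First Subcategory with the value one row down, as the
# Offset function does. "No" marks the last instance of each group.
def same_as_next(subcategories):
    return [
        "Yes" if i + 1 < len(subcategories) and subcategories[i + 1] == s
        else "No"
        for i, s in enumerate(subcategories)
    ]
```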

Let's save our work so far as new...

How many rows are the same?

So now we know the last instance of each First Subcategory, but we will also need a count of each one.

A way to start this involves the Match function. Match returns the first row on which a value occurs.

What we will do is match the First Subcategory...

...to the first subcategory...

...so when I save, we will get a Calculation repeating the first row on which each First Subcategory occurred...

Can you guess now how we can count the number of rows each First Subcategory occupies?

Well, now we'll start a Calculation where we return the row()...

...and then subtract...

..the matched reference...

...so that our calculation will look like...

...upon saving...

...we will get a 0 for the first instance of each First Subcategory, and a rising count on each subsequent row of the subcategory.
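Here is the same row-counting idea sketched in Python (assuming the rows are already sorted by First Subcategory, as in our report):

```python
# row() minus match(value): Match returns the first row on which a value
# occurs, so the difference counts rows within each group, starting at 0.
def running_count(subcategories):
    first_row = {}
    counts = []
    for row, s in enumerate(subcategories):
        first_row.setdefault(s, row)      # match(): first row of this value
        counts.append(row - first_row[s])  # row() - matched row
    return counts
```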

Now let's save our changes...

Create a first and last reference point

The Offset List function we're going to use will need three arguments,

- the values (which will be our Sales Line totals)

- our start point for counting (or, how far back we want to start)

- the end point for counting (or, how many rows we wish to include)

Our values are already on the table.

Our start point now will be the negative row count for each last instance of a First Subcategory. So this becomes a pretty simple IF statement.

I'm going to start with my brackets...

...then I'm going to copy the calculation looking at the reference being the same or not...

...and paste, then grab the second function looking at the Count, copying...

...and pasting...

...now when we save, for each "No", we should see the negative total, for each "Yes" we should see zero...

Now, we want to define the end point. This will need to be the row count plus one. Why, you ask? Because each count starts at zero.

We can copy+paste the same basic function above...

but substitute the negative sign...

...with a positive 1...

...so upon saving...

We get yet another calculation, with the sign inverted and one added...

Let's save our work...

Gather the results

In case you were wondering: this is where the fun starts.

Now we get to roll out the offset_list function that we mentioned above.

First, what value are we looking at? I'm going to reference the sales totals...

...starting with our "Back" Calculation...

...ending with our "Forward" Calculation...

...so our final calculation looks like...

...and when we save...

wayyyyy over to the right, we'll see a list of all the Sale Line totals. Each list should only be displayed on the last instance of each First Subcategory, and should reference only its respective totals...

Now, one more step, let's create a sum...

...of that group total...

...so that when we save, where applicable, we get one total per First Subcategory...
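The gather-and-sum step can be sketched in Python (an illustration of what offset_list plus sum produces, not the Analytics syntax; rows are assumed sorted by subcategory):

```python
# On the last row of each group, gather the group's values
# (offset_list(values, -count, count + 1)) and sum them.
def group_totals(subcategories, values):
    totals, count = [], 0
    for i, s in enumerate(subcategories):
        last = i + 1 == len(subcategories) or subcategories[i + 1] != s
        if last:
            totals.append(sum(values[i - count:i + 1]))
            count = 0
        else:
            totals.append(None)  # only the last instance shows a total
            count += 1
    return totals
```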

Now let's save our work to this step...

A bit of cleanup

So there are some supporting calculations here that we shouldn't need on this report. We are using First Subcategory, Back, Forward, and Group Total, and we're also going to use "Same" at least once or twice more. Everything else we should be good to drop...

So our lighter report to this point should look a bit like...

Look for the Last Results

So now, we want to prime our report for gathering the meaningful data.

To start, I'm going to ask Analytics to return a 1 for every instance where "Same" is No, and return 0 otherwise.

So we'll start our calculation with brackets...

and then copy+paste our calculation checking whether the First Subcategory is the same as the one below...

When I save this, we should see a 0 for each Yes, and a 1 for each No

Now what I want to do, is have these ones rise in sequence. How do we do that? Let's use our running_total function to return that number...

copying...

...pasting...

...and saving...

now the next fun part, we are going to match the row of the report...

...to this rising number...

...pasting...

...so that when we save...

Each row will return where the next First Subcategory can be found
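The running-total-plus-match trick can be sketched in Python (assumed equivalent logic, not the Analytics syntax):

```python
# running_total of 1-per-"No" flags, then match each row number against
# that rising column: row n then points at the last row of the nth group.
def nth_group_last_rows(same):
    running, t = [], 0
    for s in same:
        t += 1 if s == "No" else 0
        running.append(t)
    return [
        running.index(n) + 1 if n in running else None  # 1-based rows
        for n in range(1, len(same) + 1)
    ]
```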

See where we're going with this yet? We're almost there.

Let's save.

Gather Labels and Totals

Now, to complete the preparation of our data, we're going to use the Index function...

To look at the First Subcategory labels that we created...

...then from them, gather the indexed number...

copying, pasting...

...and saving...

creating a single entry for each of our First Subcategories...

finally, let's do the same for the grouped totals we created in Step Five...

so that when we save...

We get...

well, we get a messy table of unclear numbers.
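The Index step, sketched in Python (assuming index(column, n) returns the value of a column on row n, as the walkthrough uses it):

```python
# Using the pointer rows from the running-total/match step, lift each
# group's label and total into the top rows of the table.
def gather(column, pointer_rows):
    return [
        column[p - 1] if p is not None and 0 < p <= len(column) else None
        for p in pointer_rows
    ]
```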

We're almost there. I promise.

Hide the Raw Data

If we open the Table Visualization now, we should see all the starter data...

However, only two of these columns are now of any interest to us...

So let's start hiding everything...

...except...

...our labels...

...and the grouped totals...

So now, when we look at the Visualization...

We get a clean list of all the summary totals by First Subcategory!

Gosh, if you made it this far, you're awesome. Treat yourself to something nice.

Calculations Home

]]>

I noticed there is a tip referencing encoding of reserved characters. I have tried the following URLs and get an 'Invalid Argument' error for this URL:

`https://api.lightspeedapp.com/API/Account/accountID/Sale.json?load_relations=%5B"SaleLines","SaleLines.item"%5D&shopID=3`

and the same for this URL: `https://api.lightspeedapp.com/API/Account/accountID/Sale.json?load_relations=["SaleLines","SaleLines.item"]&shopID=3`

At the end of the day I need to grab sales data that includes:

- Hours worked by D/W/M
- Average Sales D/W/M
- Units Per Sale D/W/M
- Total Sales D/W/M
- Store Total By D/W/MTD
- Average Sales D/W/MTD
- Units Per Sale D/W/MTD
- Sales By Hour MTD/YTD
- Sales By Day MTD/YTD
- Store Margin D/M/YTD
- Top 10 Products MTD/YTD

Sometimes we have received requests for Analytics to report on what payment methods were used on specific Product Categories or specific Products. This type of analysis is not directly possible, because payments apply to the sale as a whole rather than to individual Products. Consider this example:

A Customer purchased a suit for $100, and a hat for $50. The customer's total was $150. The customer gave the store $60 in Cash, and then paid the remaining $90 on Credit Card.

If there are multiple categories and multiple payment methods on the same sale, there is no way to know which payment was used for which category.

However, there are a few easier scenarios:

- If one product was purchased and one payment method was used.
- If products in five categories were purchased and *one* payment method was used.
- If multiple products in *one* category were purchased and multiple payment methods were used.

These scenarios are all easy to account for, as there is either parity between payment methods and categories, or an even distribution of one onto the other.

Because not every sale will be one of the easier scenarios, Analytics cannot report on this relationship directly.

But we can build an approximation.

This will involve coming up with a logical rule for distributing values from multiple payments across multiple categories.

Fortunately, there is a simple one that we can start with.

We could take the total funds and determine the percentage that each method contributed to the total.

For example:

In the above model, the payment methods totalled $150.

The Cash made up 60 of the 150 dollars, or 40%.

The Credit Card made up 90 of the 150 dollars, or 60%.

So we can distribute each product's price across the payment methods in those same proportions.

This way, we get:

$20 Cash for the Hat

$30 Credit Card for the Hat

$40 Cash for the Suit

$60 Credit Card for the Suit

Is it definitely what happened? No. But it works, and is enough for us to start building a simple working model.
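The suit-and-hat model above works out like this in Python (the same arithmetic, just written as a sketch):

```python
# Each payment method's share of the sale total is applied to every
# product on the sale.
payments = {"Cash": 60, "Credit Card": 90}
products = {"Hat": 50, "Suit": 100}
total = sum(payments.values())  # 150

split = {
    (product, method): price * amount / total
    for product, price in products.items()
    for method, amount in payments.items()
}
```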

In addition to this, we can use Analytics to start preparing our data.

To determine how much each payment contributed to each invoice, let's start by opening up the Sale Payments report

The default dimension is by Sale Date, so let's start by adding "Sale ID" as a dimension...

...and let's also drop the dimension of Sale Date, this will be of no value in our analysis...

...also, the default report is looking at the previous 1 complete month. Just so that our report isn't enormous, let's open the filters and switch this up to being the previous 1 complete week...

Now, we're not actually interested in the payment amounts themselves, but in the proportion each method contributed to the sale.

So, we could add a calculation to divide the payment amount...

...by the row total of the payment amount...

and saving...

This will give us a calculation that mostly returns the number 1...

but we may find some instances where the payment is broken up across columns...

Let's go back to our calculation, and perhaps define it as being a percentage with four decimals...

...I'm also giving it a title of "Distribution"

And let's save our working Payments Report...

Finally, we don't need to see the value of payments on the report here, so let's hide the amount column...

So we have a visualized report that shows us by Sale ID only the percentage of payment that each method provided...

Let's save again as new...

...and then download our report as an Excel file...

Part Four: Defining Categories per Invoice

So now we can see the payment methods on each Invoice, we can run a similar Sales report to show us Top-Level Categories by Invoice as well. To start, let's open up the Recent Sales report...

And let's add a few dimensions, first, the Item Top Level Category...

...and then the Sale ID...

...and we can also get rid of the "Sale Date" field...

Also, let's make sure our date filter matches: Sales from the Past 1 complete week...

So when we run this report, we can see some sales have multiple Top-Level Categories on them...

So this is ready now, let's Save this with a custom title...

and again, we can download it as an Excel file...

So now, we can open our "Payments" and "Categories" reports in Excel...

Our Payments Report lists each Method per Invoice as a Column...

and our Categories report lists each Category per Invoice as its own line...

First, we want all the data on one sheet, so I'm going to copy the columns from the "Category" report...

...then paste them all on the "Payments" report to the right of all our Payment data...

...Sometimes adding a bit of colour makes things easier...

...So now, we want to determine how much of each Category's total was assigned to a certain payment method...

To do this, we need to match+index each Category to its respective distribution of methods...

Let's start by using the Excel "Match" function to find where the Category's Sale ID can be found in the list of payments.

On my Excel sheet, the "Category" Sale ID is in Column N, the "Payment" Sale ID is in Column B, so my Excel formula looks like this in cell R2:

...and it shows me that the corresponding Sale ID is in Row 3...

...so far so good, let's drag or copy+paste the formula all the way to the bottom of the category data...

and it's looking good.

Next, we want to Index from each of the payment columns the corresponding percentage. We will want a different column for each payment method, so in Cell T2, I will use the following function

This way, when I drag the formula to the right for each payment method, the payment method column will change, but the Index source (Column R), will stay locked in place.

So when we drag the formula to the end of the data, we get a corresponding percentage value for each Category on each Invoice...

We're almost done!

Now, what we want to do is multiply each category's total against its respective payment amounts.

Most of the payment amounts we see are 100% or 0, although we're seeing some split payments on the list.

So we'll create one final calculation, this time in column AE, multiplying the Value in Column P by the percentages, starting with column T

So, my Excel formula in cell AE2 is

This way, when I drag my formula over, it will follow the dollar amount from Column P, and apply the percentage amount from all the payment columns: T-AB

Like so...

So now, to finish preparing our data, all we need to do is reference our Product Categories one final time...

That we can find in Column O...

So now we have a full set of data, with Payment Methods distributed proportionally among Categories per sale. We can complete our analysis using a pivot table...

a sumif function...

Or any other way you wish!
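The whole Excel match/index/multiply pipeline can be sketched in Python (hypothetical data shapes, not the spreadsheet's actual columns):

```python
# Match each Category line's Sale ID to its payment distribution, then
# multiply the line total by each method's percentage.
def distribute(category_rows, distributions):
    # category_rows: (sale_id, category, line_total) tuples
    # distributions: {sale_id: {method: fraction}}
    out = []
    for sale_id, category, line_total in category_rows:
        shares = distributions.get(sale_id, {})
        out.append((category, {m: round(line_total * f, 2)
                               for m, f in shares.items()}))
    return out
```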

Discerning minds may notice that the totals from the Payments and Sales reports may not match. This does not necessarily indicate a problem.

Calculations Home

As I mentioned in the discussion on best practices, asking for help is one of the best ways to find the best approach for calculations. This process of turning Dusty Inventory into a Pricing Rule is the result of a collaborative effort with my colleague Bryn Harris.

Let's say we're looking at our Dusty Inventory report, but we want to take some actions within Retail to discount the prices or add the Products to a Price Rule.

We could use Calculations to either Define a new Price, or to add a tag to all the Products to quickly identify them in Retail for further actions.

In this discussion, we'll look at creating tags for adding to Products in Retail.

How you define tags in Retail depends on what their desired workflow application is. I may want to search Retail for all Products that are "dusty" or I may want to find products that were dusty at a certain time. Let's start by creating a default "dusty" tag on the Dusty Inventory Report...

We can start with a simple version of the tag with no calculations, just the word "dusty" with a comma in between quotation marks...

this way, when I save my calculation, I get the simple "Dusty," tag on every product...

but let's say we also want some more detail in this tag, perhaps the date when the product is considered dusty...

Tags must be one string of characters with no spaces. Let's say that today is October 22nd, 2018; a good detailed tag might be "dusty20181022" or "dusty2018oct22".

From our cell-based functions, we could use the "Concat" function to create a tag with the word "dusty", and then a calculation to create the date content in the tag.

Again, let's start just by creating the word "dusty" between quotation marks (this time, without the comma)...

So that our report looks like...

and then, from our date-based functions, we could also add the "now" function to start creating the date components of the tag...

when we add "now", it returns the complete time-stamp...

which may be a bit much for our tag. So, also from the date-based functions, we could use the functions, extract_years, extract_month, and extract_days to pull just the content we're looking for...

So we start by entering just the formulae...

copying the calculation for "now"

and then pasting it between the brackets in the respective extract functions...

not forgetting to label everything...

So when we save, our report looks like...

and following our best practices, let's save the work that we have done here so far...

Now the dates are best if they are in sequence, so it's a good idea to create the dates as a number. To do this let's multiply the extracted year by 10,000, and multiply the extracted month by 100. Starting with the base formulae...

then copy+pasting the earlier functions...

Now when we save, the year, month and date...

can be added as one number. Pulling the three sources...

...copying...

...pasting...

..and fixing embarrassing errors, like extracting the hour and calling it the day...

...finally creating...

...so that when we save...

...we get one number, YYYYMMDD that will work whether the month or day has one digit or two...
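The date-number arithmetic above, sketched in Python:

```python
from datetime import date

# year * 10000 + month * 100 + day builds YYYYMMDD as a single number,
# whether the month or day has one digit or two.
def date_number(d):
    return d.year * 10000 + d.month * 100 + d.day
```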

And let's save our work so far As New...

Now a lot of the starter calculations here are kind of redundant, I'm going to remove "now"...

and all the "extract" functions...

So we just have these three calculations...

So now, we just need one final calculation to create the final tag: dusty+20181022

We can use "Concat" to do this.

Starting with the function and brackets...

...then copying the content, first "dusty"

and pasting...

...then our date number, copying...

...and pasting...

So now when we save...

...we get one tag combining the date and the dusty status.
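The Concat step is the simple string join, sketched here in Python:

```python
# concat("dusty", date_number): one tag string combining the dusty
# status with the date number.
def dusty_tag(date_number):
    return "dusty" + str(date_number)
```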

Once again, let's save as new...

Now, the Retail Import tool will let us import-add multiple tags onto products: first, if the tags are separated by commas, and second, if the title of the column is "Add Tags".

So we could add the first "dusty" tag in our "Concat" function, e.g....

Let's also swap out the Product Description...

for the System ID

This will make mapping our new tags to our Retail products much easier

Finally, let's get rid of all but the last calculation...

So that all we have remaining is the System ID, and the new tags

and once again let's save as new...

Finally in Analytics, we can download our file...

...as a CSV...

To make the export file something that works in Retail, there are a few adjustments we want to make. To do this, I'm going to open up my CSV in Excel...

First, we want to get rid of the line numbers in column A. So let's select Column A...

...right-click or control-click, and delete...

also, our System IDs have shown up in Scientific notation. To fix this, let's select them all in Column A

...right-click or control-click, and select "format cells"

and we're going to save as a number with zero decimal places...

So our data now looks like...

And it's ready to import!

start a new Import...

...select our new file...

...and ask Retail only to update existing products...

If our data is prepared properly, we will get a preview showing us how the mapping will take place...

...and we can import our sheet...

...so that the new tags are added to the products in Retail.

We can test this now, if we go to Inventory> Item Search...

We can search for products by tag...

...and see our results...

Now we can do the same in Quick Edit Items, in Price Rules, in Reporting, and in other Retail tools!



Let's say that we're using Analytics to follow Inventory and define action points. Maybe we used it to build the Dynamic Reorder Points report....

Now that we can measure what Products need to be reordered, and how many need to be reordered, we can have Analytics prepare the data for us to Import as a PO into Lightspeed.

First, we need to add a Product identifier dimension. The best to use is "System ID", as this is a unique ID in Lightspeed that is never duplicated...

Let's run the report again to include the System ID...

Now our report looks like...

So we have the data that we need to import a PO, but we need to make a few changes.

First of all, not every Product on this report actually needs to be reordered.

What we could do is use an IF statement to nullify Products that do not need to be reordered...

For example, if we are not reordering at least one of a certain product, make its System ID null...

Following our best practices, let's start our calculation with brackets...

...then copy the calculation from our Dynamic Reorder Points value (or whatever formula you may have designed to define your own reorder points)...

then paste it into our IF criteria...

saying that if this value is less than one, return a null cell. If not...

Return the System ID (which Analytics reads as a number)...

So when we save...

We get a column on the right-hand side that stops returning System IDs once the reorder amount is less than one.

Now, we could use the same calculation to return the amount to reorder if that amount is not less than one...

I'm going to title this calculation "Order Qty" as Lightspeed will automatically recognize this column header when we import this spreadsheet...

And again we'll copy the formula for determining the amount to reorder...

Paste it into the criteria...

and paste it into the results if not less than one...

So that when we save...
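The nullify-if-below-one logic can be sketched in Python (assuming a null cell behaves like None):

```python
# If fewer than one unit needs reordering, null both fields so the row
# carries nothing into the PO import; otherwise pass them through.
def po_line(system_id, reorder_qty):
    if reorder_qty < 1:
        return (None, None)
    return (system_id, reorder_qty)
```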

So now, let's save our work as a new report...

From here, all our report needs is cosmetic.

Let's start by hiding the dimensions and measures from visualization...

Because we only need them if their result is greater than 1...

let's also remove all except the last two calculations...

Their formulae are implicit in the final two calculations, so we can safely remove them from the table now...

And again, let's save this report as new...

From here, we can download the report...

and we'll select the option of CSV "with visualization options applied"

If we want to see what the downloaded file looks like, we can open it here...

Now, we can go to Retail, and create a new PO...

and when we save...

We'll see the option to Import...

...from here, we can select the file that we downloaded from Analytics, and then pull its products and order quantities onto the order in Retail!

]]>

If I want to look up a product by name, I use this PHP to generate the API URL string:

"Item.json?limit=10&description=~,".rawurlencode("%".$search."%");

(I tried plain 'urlencode()' and double encoding 'urlencode(urlencode())' too)

When $search includes an ampersand (&), no products are found.

i.e. searching for 'B&W Speaker' does not give me a result, but 'W Speaker' does.

Any ideas on how to properly encode a string with an & in it?
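For comparison, here is what the encoded term should look like under standard percent-encoding (a sketch using Python's standard library; it assumes the API expects the ampersand to arrive as %26 so it is not read as a query-string separator):

```python
from urllib.parse import quote

search = "B&W Speaker"
# Encode everything, including the SQL-style % wildcards around the term.
encoded = quote("%" + search + "%", safe="")
```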

thx!

maarten

Is this correct? If so, where can I find a report that shows me the nett units?

Thanks

One of the most common ways of reading your data in Analytics is by the Product Category. However, what if we have lots of products in Lightspeed that don't have any Category associated with them? Can we find a way to use data that we already have, and then use it to define the Product Categories in Retail?

Yes we can.

Lightspeed has an Import tool that we can use to update up to 1000 products at a time, including their fields for Categories and Sub-Categories.

A good thing to do first is to create your Categories in Retail.

You can find the steps for doing this here:

We can use Analytics to identify Products without Categories in the Uncategorized Inventory report.

Let's start by adding the dimension of Item> IDs> System ID...

This is a good way to identify which products need to be updated when we Import the changes into Lightspeed.

Now, thinking about our IF calculations, we want to find some data in Lightspeed on which to base our Categories. This may not always be possible, but there are a few instances where it is. For example:

- If all the products from the same Vendor should be in the same category...
- If all the products from the same Manufacturer should be in the same category...
- If all the products from the same Vendor that were received this week should be in the same category...
- If all the products that contain the word "Jacket" in their Description should be in the same category...
- ...and so on.

To create a few examples, let's add Vendor, Manufacturer, and Days Since Received onto the report...

To start, let's say that everything we get from the Vendor "Hawley"...

goes under the Category "Wheels"... (let's leave the negative instances out for now)

and perhaps the Sub Category of "Aluminum Wheels"...

So that when we save, we'll see...

Ok, so far so good.

Let's save a working version of this...

So now, let's say that our Products from the Manufacturer "Wheels Manufacturing" are also in the Top Level Category of "Wheels", but perhaps in Sub Category 1 of "Custom Wheels", we could go back to our first two calculations...

...and add the second IF statement...

So that when we save...

It's looking a bit better...

Maybe we want to combine more than one criterion.

Let's add starter individual IF calculations to identify the products that are from the Vendor "Quality Bicycle Products", and to identify those received in the past 30 days...

So when saving, we see two new columns...

Now, remember from our Logic Functions, the "And" statement...

We'll ask to look at the first argument...

...then the second

so that when saving...

We get a new column highlighting when both criteria are met!
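The "And" statement above, sketched in Python (using the vendor name and 30-day window from the walkthrough):

```python
# and(vendor = "Quality Bicycle Products", days_since_received <= 30):
# True only when both criteria are met.
def both_criteria(vendor, days_since_received):
    return vendor == "Quality Bicycle Products" and days_since_received <= 30
```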

So we can use our newest calculation as an argument in our IF statements for Top Level and Sub Category...

so when saving...

Now the only thing we need on this report are the categories, and because the rules are implicit, we can remove the calculations we built to find the Quality Bicycle Products that were 30 days old or newer...

and when done, let's save our work as new...

Now that we have prepared the Categories and Sub-Categories for importing, we can make a few changes to this document for importing back into Lightspeed.

First, take a look at the article about Importing into Retail

To prepare our data, we want to hide all the columns from visualization except System ID, and the Category and Sub-Category calculations...

So that all we have are the columns for identifying or importing...

So now, we can download the data from Analytics...

I'm going to save mine as Excel...

and we can see our data ready now if we open the Excel file...

Now, we can follow the steps for Import/Updating Products into Retail!

Calculations Home

]]>

Analytics will let you add a dimension of "Top Level" category, or will let you add a dimension of the full category, including its complete subcategory. Currently there is not a way to have Analytics report on only the first subcategory, but we can add a calculation to change the full category into just the top-level and secondary subcategory.

Let's start our report by looking at recent sales:

And let's change the dimensions on the report from "Completed Date" to "Category"

Some of these categories are one layer deep, some are two layers deep, some even go further.

What we want to do, from our cell-based functions, is use the "Substring" function to slice a category at a very precise location: before its second subcategory begins.

This is the best way I have found yet*:

- What is the length of the first category?
- If there is a forward slash following this, what is its length?
- What is the length of the first sub-category?

Let's start by using the "Position" function. Position requires two arguments: which column we are looking at, and what content we are looking for.

In this case, we're looking at the Category...

...for the forward slash, as this is how Retail distinguishes hierarchy between category levels...

So when we save...

We get a number showing us where the first slash occurs.

Some of the categories here have no forward slash at all, such as "Labor" in the example above. In these cases, we determine the length of the first word by simply using the "Length" function.

For the categories that do contain a slash, the position tells us where the first word ends: "Components", for example, is 10 letters long, and the forward slash is the 11th character in the string.

So we can build an IF calculation to determine what to do next:

If the position of the first slash is zero, return the length; if not, return the position of the first slash minus 1.

Let's start simply with brackets...

...and we'll copy the first calculation, searching for the first slash...

...then paste it into the IF criteria below...

now our positive results, if the position is zero, return the length of the full category...

then if the position is not zero, return the actual position...

...minus one...

So when we save, it shows us the length of the first word before any slashes...
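The same logic in Python (an illustration, not the Analytics syntax; position() is taken to be 1-based and 0 when not found, as the walkthrough describes):

```python
# If there is no slash, the first word is the whole string; otherwise
# its length is the slash position minus one.
def first_word_length(category):
    position = category.find("/") + 1  # 1-based, 0 when not found
    if position == 0:
        return len(category)
    return position - 1
```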

Now we should test this, to determine that it is producing the desired effect. We can test it by starting the substring function...

Substring requires three arguments, what field is being looked at, where the substring begins, and where it ends...

What we're looking at and where it begins are easy: category, and position zero...

Where it ends is what we're testing from the calculation above. Copying...

...pasting...

...saving.

We're checking for two things: are the words all complete? And are there any additional characters after the words (such as the forward slash)?

Our results look all right, so let's save our work so far...

This, we expect, should not be difficult to determine. Every slash would only be one character long, if it is to be found at all.

We know that it's in a category if its position is greater than zero. We know it's not in a category if its position is zero.

So, this too, becomes an easy IF calculation...

And our source for its position comes from our first calculation...

Pasting...

And saving...

Where it exists, we have a 1, where it doesn't we have a zero.

So far, so good. Let's save our work to here.

Now is where it gets a bit trickier.

Similarly to finding the length of the first word, we want to find the place in the category where the second forward slash occurs. To do this, we want to rebuild the content after the first slash.

Again, we'll want to use the substring function to return this content, but instead of beginning at position zero, now we are starting at the position of where our first forward slash is.

BUT, if we start in that position, it will begin with the forward slash and then the subcategory, e.g.:

What we want to do is start in the position plus one.

So, Substring...

...beginning...

...from our first position plus one...

...and then ending with just a really high number for a category length. I'm going to say 100, but this is arbitrary.

So that when we save...

We have all the content after the first forward slash.
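This step, sketched in Python (the 100 is the same arbitrary "long enough" end point as in the walkthrough):

```python
# substring(category, position + 1, 100): everything after the first
# forward slash.
def after_first_slash(category):
    position = category.find("/") + 1  # 1-based, 0 when not found
    return category[position:position + 100]
```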

Let's save our work as new.

Now, to complete this step, all we need to do is apply the same process as we did in step one. Beginning with the position of the slash...

in the remaining content...

so our final calculation looks like

And when we save, we get the same logic of numbers, 0 or some integer...

Let's save as new.

And again, our if statement logic will be:

If the position of the second slash is zero, return the length of the remaining content.

If the position of the second slash is not zero, return the position of the slash minus one.

Starting:

If the position...

is zero, return the length of the remaining content...

...into the calculation...

if not, return the position of the slash...

minus one...

So when we save...

We get the length of the second word.

Again we can test this...

Using Substring, we can look at the remaining content, starting at zero, and looking as far as the length of the second word.

For the sake of brevity, if you are curious, this is the calculation...

and here it is in action

So now we have determined the length of the first word, the length of the slash, and the length of the final word.

Perhaps we would be interested in returning only the category labels themselves, First Category and Second Category, for further analysis.

For the purposes of this discussion though, I'm going to remove all the calculations except the Length ones...

...so our report should now look like...

and let's save as new...

Now, all we need to do is add the three calculations...

First...

plus...

second...

plus...

third...

and...

when saving...

We get one number.

Let's save this again...

Let's add a new calculation for substring of category...

starting in position zero...

then ending in our mess of calculation from the previous (too big to read now**) field

pasting...

and saving...

and saving...

So the work is done now!

Now, all we need to do is cosmetic. Let's remove all the calculations, except the last...

And let's hide the Dimension of Category from Visualization...

Giving us a final data set that we can run, export, and then easily manipulate for further analysis!

**Spoiler Alert: Our final calculation here is:

substring(${categories.full_path_name},0,((if((position(${categories.full_path_name},"/"))=0,(length(${categories.full_path_name})),(position(${categories.full_path_name},"/")-1)))+(if((position(${categories.full_path_name},"/"))=0,0,1))+(if((position((substring(${categories.full_path_name},(position(${categories.full_path_name},"/"))+1,100)),"/"))=0,length(substring(${categories.full_path_name},(position(${categories.full_path_name},"/"))+1,100)),(position((substring(${categories.full_path_name},(position(${categories.full_path_name},"/"))+1,100)),"/")-1)))))
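If it helps to see that logic outside of Analytics, here is a minimal Python sketch of the same steps. The `first_two_levels` function is hypothetical (a plain string stands in for `${categories.full_path_name}`, and `find()+1` stands in for the 1-based `position()` function, where 0 means "not found"):

```python
def first_two_levels(category):
    """Keep everything in the category path up to the second slash,
    mirroring the calculation above (position() is 1-based, 0 = not found)."""
    first_slash = category.find("/") + 1
    if first_slash == 0:
        return category                    # no slash at all: one word only
    len_first = first_slash - 1            # length of the first word
    remaining = category[first_slash:]     # content after the first slash
    second_slash = remaining.find("/") + 1
    len_second = len(remaining) if second_slash == 0 else second_slash - 1
    return category[: len_first + 1 + len_second]   # word + slash + word
```

The three lengths being added in the substring are exactly the three Length calculations we built above.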

Calculations Home

Knowing how much inventory to order needs to look at: expected demand for products, current availability of products, expected availability of products, and any discrepancies.

- Current availability can be measured by *"Quantity on Hand"*
- Expected availability can be measured by *"Quantity on Order"*
- Expected demand can be created from *"Days to Sell Out"* on the Low Stock Report

As I mentioned in the discussion on best practices, asking for help is one of the best ways to find the best approach for calculations. This method of following inventory movement and from it harvesting action points is very much inspired by Tomer Shavit's work.

If we open up the report, we'll see a metric called "Days to Sell Out". This looks at the average sale intensity over the past 90 days, compares it to the current level of inventory, and identifies how many days' worth of inventory is available.

So now, how much should we re-order?

The default filter for "days to sell out" on the report is less than 7, but we can change the filters on this report to look at more considerations...

Let's start by changing the "days to sell out" filter from 7 to 60, giving us a wide view of products that have upcoming action points...

So this gives us a working list of products that we expect will need some attention in the next 60 days. Now, seeing the measures for "Quantity on Hand" and "Days to Sell Out", let's calculate the average daily sales volume.

To do this, we divide the quantity on hand, by days to sell out...

Which produces a number for us, recreating the expected daily sales amount.

Now, our question becomes, how much do we re-order?

To answer this question, we need to ask another:

If we want sixty days' worth of inventory, then we will need to multiply the expected daily average by 60. If we want 14 days' worth of inventory, we need to multiply the expected daily average by 14, and so on. (Choosing how many days' worth of inventory you want to have in stock involves asking some more questions: are there fixed costs with each individual shipment? If so, ordering for greater timeframes brings these costs down. Are there extra costs in storing inventory that isn't selling yet? If so, ordering for shorter timeframes brings these costs down. This will vary between Vendors and Categories.) In Analytics, we can account for many of these...

Let's say we want 30 days of Inventory in stock. We'll create our desired level by multiplying our "Daily Average" by 30, starting with brackets:

Then copying our calculation for the average daily volume...

...then pasting into our 30-day calculation...

This gives us a number for our desired 30-day inventory level...

Now, in accordance with our best practices, let's save this report.

So now we know what our desired inventory level is, but we haven't yet figured out how much to order. This becomes simple: we subtract our available inventory from our desired inventory level.

So we'll start a new calculation, using brackets...

Copying our calculation for desired inventory level...

...pasting...

...then subtracting the quantity on hand...

and saving...

So now this highlights the difference between the desired amount and the available amount, or, the amount required!
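The arithmetic we just built can be sketched in a few lines of Python (the function name and the 30-day default are just illustrations of the logic above):

```python
def reorder_quantity(qty_on_hand, days_to_sell_out, target_days=30):
    """Desired inventory level minus what is already on hand."""
    daily_average = qty_on_hand / days_to_sell_out   # expected daily sales volume
    desired_level = daily_average * target_days      # e.g. 30 days' worth
    return desired_level - qty_on_hand               # the amount to re-order
```

For example, 50 units on hand with 10 days to sell out gives a daily average of 5, a desired 30-day level of 150, and a re-order amount of 100.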

We can now sort the table by this amount, bringing up the products that require the greatest amount re-ordered...

We could use calculations to retrieve these results for us in a meaningful way.

Let's start by opening up Recent Sales, then setting the filters to go back to the past 400 days...

Then to start with this month, let's go to our date functions and extract the month and extract the year...

...from the sale completed month...

...so that upon saving...

Let's also use the add_months function...

...to add -1 months to the sales date...

giving us an idea about the previous month...

Then, in one shot, let's extract the month and the year...

...from the sales date one month ago...

so that when we save...

We get numbers for the respective years and months!

Finally, let's add a calculation to define last year, this is simple. It's this year, minus one.

Starting with brackets...

...copying "this year"...

...pasting...

...and saving...

And this brings us to a good spot to save our changes to the report so far...

Now, in one shot, from our table-based functions we're going to add FIVE indexes...

...all looking at row 1...

...and to these, we are going to ask Analytics to look at Row 1 for:

- This Month
- This Year
- Last Month
- Last Month's Year
- Last Year

...pasting them into their respective fields...

...so that when we save, we have columns that reference all the data we need to create the dates

And let's save this as a new version of our report

Now we can build the dates that we want to arrange for this month, last month, and this month last year...

We'll start with this month, again using the Date function...

The first number we want is the year, so this will be the calculation we used to index "This Year" from row 1...

...which we'll copy, and then paste into the date calculation...

...next we'll need the month, so we'll grab the indexed "This Month" from row 1...

...and then paste into our date...

...now the last number we need is the day. To have the days rise in sequence, let's add Row()...

Now when we save...

We have the first re-creation of this month's days.

Let's save the report as new...

Next, we'll add a new date function...

And (fast forward), we're going to add the year from the indexed "last month's year", the month from the indexed "last month", and the day from the Row again...

so when saving we get a recreated sequence for last month...

and we'll save as a new report again...

Then we'll add a new date for this month last year, sourcing the indexed "last year", the indexed "this month" and the row...

...and save...

...and save...

So now we have all the dates we need to build our comparisons.
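Put together, the three recreated dates work something like this Python sketch (hypothetical names, and it assumes the row number is a valid day of each month, which is why the report is limited to the first 31 rows):

```python
from datetime import date

def recreated_dates(reference, row):
    """For a table row, rebuild the matching day for this month,
    last month, and this month last year."""
    this_month = date(reference.year, reference.month, row)
    # add_months(-1): step back one month, wrapping into December if needed
    if reference.month == 1:
        last_month = date(reference.year - 1, 12, row)
    else:
        last_month = date(reference.year, reference.month - 1, row)
    this_month_last_year = date(reference.year - 1, reference.month, row)
    return this_month, last_month, this_month_last_year
```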

Do you feel like a coffee? I'll wait here if you need a minute. You've done a lot of hard work up to this point.

Ok cool.

Now, this is unorthodox, but to facilitate this discussion, I'm going to remove all the calculations except our final dates...

Everything is implicit in our date calculations...

So they will still work all by themselves...

So from here, it becomes simple.

Let's add three calculations to match against the completed date...

One for this month...

and for last...

etc...

So that when saving...

We see where each date on our table occurs from our completed dates column...

And save.

Now, let's add three calculations to index from the sale line totals...

the positions that we produced in our matches...

...from our different respective months...

...and save...

...and save...

One last step with the calculations. From our math functions, let's now add three running totals...

one for each month...

...as we did prior...

and save...

...and save.

So that's it for calculations! The rest of our report is now cosmetic.
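If you're curious how the match, index, and running-total steps fit together, here is a rough Python sketch (hypothetical names; plain day numbers stand in for the recreated dates):

```python
def running_month_total(completed_dates, sale_totals, month_days):
    """Match each recreated day against the completed dates, index the
    matching sale line total, and keep a running total."""
    running, total = [], 0.0
    for day in month_days:
        if day in completed_dates:                            # the match() step
            total += sale_totals[completed_dates.index(day)]  # the index() step
        running.append(total)                                 # the running_total() step
    return running
```

Days with no sales simply carry the previous total forward, which is what makes the graph lines continuous.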

Let's remove all the calculations except our running totals...

Let's hide the "Completed Date" and "Sale Line Total" columns from our visualization...

Let's also limit our visualization, just to the first 31 rows of data...

and let's see what this looks like on a graph...

We're almost there, but it looks like we forgot something: labels...

My graph doesn't know how to measure the X or Y, so to make it simple, I'm going to add a "Day of the Month" calculation, just using Row()

So when I save...

That looks better.

And there we are!

We can use calculations to build totals around differing timeframes to prepare for these kinds of analyses.

As I mentioned in the discussion on best practices, asking for help is one of the best ways to find the best approach for calculations. This way of preparing differing timeframes is very much inspired by an approach that my colleague Max Dunlap developed.

Let's start with the Recent Sales report, but change the date filter to look at the past 13 Months.

Let's change the dimension of Sale Date to, say, Top Level Category.

Let's also pivot on the Sale Month. This should give us a report of monthly totals by category.

Let's also sort so that the most recent months are to the right, and the oldest month is to the left...

Let's start simply by finding our totals this month.

From our pivot functions, we can use the pivot_index function to highlight one of these columns.

Because we are looking at the previous 13 months, and because we are sorting months from oldest to newest, "this month" will always be in column 13.

So when we save, we get a new column highlighting just the most recent month.

Similarly, we could use the same calculation to pull up last month's sales too...

Only we would look at column 12 instead of column 13...

So far so good, let's save this as a new report...

Now let's see if we can recreate year-to-date. A good way to start, looking at our date-functions, would be to extract the year from the completed month...

So for each column, we can determine which year the sale took place in...

Now, this year must be the largest year on the table, so we could create a pivot-offset list looking at all the years in the row. To do this, we'll start our calculation with brackets...

We'll ask it to look at the extracted years...

...starting in its own row (0)...

...then looking up to 13 columns to the right...

...so when we save...

We get a big long list of all the available years. Now the only thing we need from the list, is to know which year is the highest. From our math-functions, we could use the large function to return the largest value from the list...

Again, we'll start our calculation with brackets...

...asking it to look at our list of years...

...and identifying...

...position one.

So when we save, we now get one number for the current year, and another number for the largest year.

Again, let's save as new

So, now this becomes a simple IF function: if the current year is the same as the largest year, it should be included in our year-to-date totals. If not, it should not be.

So let's build this using brackets...

If the sale year...

...is equal to...

...the largest year...

...then show us the sale-line totals. If not, don't.

Now, when we save...

We get totals for this year, but not for last.

Again, let's save as new...

Now, we want to get the row totals for the sales this year. Again, we can use a pivot_offset list to return all the monthly totals. As usual, brackets...

...copy...

...paste...

...save...

We're almost done. Now that we have the list of monthly totals, all we need is to find the total of this list. We could use "Sum" to do this...

...and by now...

...you know the drill...

...so when saving...

We get one number.
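The whole year-to-date chain (the offset list of years, the largest year, the IF, and the sum) can be sketched in Python like this, with hypothetical names and one row of pivoted data as plain lists:

```python
def year_to_date_total(sale_years, monthly_totals):
    """Sum only the pivot columns whose extracted year is the largest year."""
    largest_year = max(sale_years)          # what large(list, 1) returns
    return sum(total for year, total in zip(sale_years, monthly_totals)
               if year == largest_year)     # the IF step, then the sum
```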

Again, let's save as new...

...and again, as per above, I'm going to delete the list from our report for now

Finally, the same as we did for sales this month and last month, we can now pivot_index...

...the sales from this year...

...oh, let's make sure to pull the results from column one. Because the list reads from left to right, results at the end of the report will be incomplete...

So upon saving...

Also, we could pivot index the sales from our first row, because it will


Giving us three comparisons to include against this month's sales...

Let's save this report as new...

From here, the data is done. All that's left is cosmetic...

The "Year" calculations should all be implicit now in our totals, so we can remove them...

...as we can the monthly totals...

...also, we can hide the measures from visualization...

So that our final report is only looking at the measures we want to see...

And so that we can visualize in a meaningful way...

...or compare in other ways...

Let's also add System ID as a dimension, to distinguish instances of Products with the same name from one another...

Then, to rule out case-sensitivity, let's use the "Upper" calculation from our cell-based functions to convert all content in the Product Description to upper case...

So when saving, everything is presented in capital letters...

The next thing we want to do is ask Analytics to find the first instance of each Description.

And we are going to ask Analytics to match the upper case descriptions...

...

...this should show us the row that each Description first shows up on, and we should expect to find that...

...the first instance is identical to the row.

In most cases.

It's the ones that don't match their row that we want to find now.

Let's save our first version of the report...

So now, let's ask Analytics to identify where the first instance is

Let's create an IF statement. If the row is the same as the first instance, return a 0, if not, return a 1.

So we'll start the IF statement...

Copy+paste the "First Instance" calculation...

and compare it to the row...

So that when we save...

We get a new calculation highlighting duplicates over others. Again, let's save this report as new...

Now we don't want to just identify the duplicates. We want to pull them out in a meaningful way. To start this, let's have the duplicates rise in sequence. We could do this by using the running_total function from our math functions...

and ask it to add the 1s from the duplicates...

so that when we save...

...with each new duplicate on the list comes a new number in sequence.

Let's save our work so far as a new report...

So with each new duplicate comes a new integer: 1, 2, 3, etc. Does this remind you of any other sequence? If you guessed rows, you're right!

That means we can ask Analytics, by row, to find where each duplicate occurs.

We'll start it with brackets...

First, we'll ask it to look at the Row...

...then, we'll copy the running total of duplicates...

...and ask it to look at this sequence...

Now look at the numbers that show up on our table...

It tells us that the first duplicate is in row 298, that the second duplicate is in row 409, that the third is in row 700, and so on...

We're almost done, but once again, let's save our work as new...

Finally, let's use the Index function to pull up these Descriptions so that we can now find them in Lightspeed!

We'll start our Index function with brackets...

Then ask it to look in the Descriptions column...

...for our matched position...

...of rising duplicates.

So now when we save...

We get a list of only duplicated Products!
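The whole duplicate-hunting chain can be condensed into a short Python sketch (a hypothetical function; in the report, the Match, IF, running-total, and Index steps each live in their own calculation):

```python
def duplicated_descriptions(descriptions):
    """Return only the descriptions that appeared earlier in the list,
    ignoring case, in the order their duplicates occur."""
    upper = [d.upper() for d in descriptions]        # the Upper step
    result = []
    for row, text in enumerate(upper):
        first_instance = upper.index(text)           # the Match step
        if row != first_instance:                    # the IF step: a duplicate
            result.append(descriptions[row])         # the Index step
    return result
```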

Let's save our work again as new.

There are a few cosmetic things we can do now to tighten up the report. First, we can remove all but perhaps the final two calculations...

So that gives us more real estate to play with...

Also, let's hide the first set of dimensions from the Visualization...

This is an important step because the Dimensions on the left-hand-side have

Finally, we could also ask Analytics to look for the System IDs for duplicate products, just by copy+pasting our final calculation...

...but substituting the index column from Description for the System ID

This way, when we save...

We get one report that finds the results we need for our next steps in Lightspeed!

We can add some calculations to Analytics to organize this.

Let's start by just pulling up our sales by month for the past 36 months...

and then we'll take the "Sale Completed Month" from our report, and change it to our Financial Month.

From the date functions discussion, we could use the add_months function. Add_months requires two arguments: how many months forward, and which month we're starting from.

Let's say the start of our financial year is May. What we need to do is add the months to May that make it January. If we think backwards, December needs one month added to be January. November needs two. May needs eight...

Then the months we want to start from are the Sale Completed Month

So when saving, we have the redefined financial month...
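The shift we just applied can be sketched in Python (a hypothetical `financial_month` helper, assuming the May start described above):

```python
def financial_month(sale_month, fy_start=5):
    """Renumber calendar months so the financial year starts in fy_start:
    with a May start, adding 8 months turns May into month 1 (January)."""
    shift = 13 - fy_start                 # May (5) needs eight months added
    return (sale_month + shift - 1) % 12 + 1
```

So May reads as month 1, December as month 8, and April closes the financial year as month 12.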

But we want to read the results as a year. To start this, let's extract the year...

...from the Financial Month. Copying...

...pasting...

...saving.

...and let's save our work so far as a new report.

Next, we want to identify this Financial Year, the largest Financial Year on the table.

We can do this, using the "Large" function from our Math functions.

We want to look at the Financial Year...

...find the largest...

which will be in position one. So when we save...

...also, we could proactively create a new calculation asking us to pull up the previous Financial Year

...so that when we save...

...and again, let's save this as a new version...

So now, let's turn these years into dates. Using our Date calculation, we can re-create dates that are somewhere on the list, and then organize them the way we want.

The Date function requires three arguments: the year, the month, and the day...

Let's copy the calculation for "This Financial Year" into the year position, use the row() function to define the month (month one will be row one, month two will be row two, etc), finally we can simply define the day as the number 1...

I've created two of these now, one for This Financial Year, and another for Last Financial Year.

Now when we save...

We have a table for a recreated year to read our results on.

So now, we have the framework to match+index the sales totals from the default sorting to our desired sorting.

Let's start by matching the Financial Year-Month (column 6 above) to the Financial Month (column 2 above)

We'll start building the calculation...

copy+pasting the Financial-Year-Month calculation...

and then copy+pasting the first Financial-Month calculation...

Now when we save, we see which rows correspond to our Financial Months for this Financial Year and Last Financial Year

And then using the Index function, we can pull from the Sale Line Totals...

each corresponding value for our Financial year...

So when I save...

When I do it for both years, we can see the values being pulled from our report into the right order

From here, all we need to do is cosmetic. Let's see what this looks like on a visualization...

now, let's hide everything except the last two columns...

then just for fun, let's copy the "Last Financial Year" calculation, but replace the -1 with -2

When we save...

It pulls up the corresponding months' results into the format that we want!

Let's say for example, that we are still interested in reporting on the top categories of sales by store, but we are using a pivot on store...

Using the "Large" function from our math-based functions, we can identify the largest value from a column.

The "Large" function requires two arguments: which column we're looking at and which position of large we're looking for, (is it the 1st largest, 2nd largest, etc...)

For our first argument, we'll ask the calculation to look at the sale-line total...

and for the second argument, we'll ask the calculation to look at the row number. This way, the highest value will be on top, the second-highest value will be in the second row, the third-highest value will be in the third row, and so on...

So my first calculation here looks like

If I have prepared my calculation properly, I will see a column ranking the values from highest to lowest...

But it's only showing me the value that is highest, not the category that is highest...

So the next thing I want to do, is report on which row each value is coming from. For this we can use the "Match" function from our table-functions.

The Match function needs two arguments: which value we're looking for, and the column in which we are looking for it.

Following our best-practices, let's open up a new calculation, and start building it with brackets...

then we'll copy our first calculation, referencing what we're looking for...

...and paste it into the first set of brackets...

next, we'll add the second argument, where we are looking for it; the sales totals...

now my calculation looks like:

and if I have prepared it properly, when I save...

We now get a second column showing which row the value can be found in...

In the first store, this is unremarkable because the default sorting is from highest to lowest...

but in the other stores, we see that the referenced high categories do not line up with their respective rows...

Now, keeping up with our best practices, let's save a working version of this report...

Now on our report, we don't just want to know what row the value is on, we want to know what the actual category is.

If we know which row the value is on, we can ask Analytics to find the category from that row...

From our table-functions, we can use the Index function to do this. The Index function needs two arguments, which column we're looking at, and which row we're asking it to look in...

We'll create a new calculation, and again, we'll start it with brackets...

...and we'll start by asking it to look at the column of Categories...

...then we copy the calculation from the field above...

...and paste it into our final calculation below...

So our final calculation now looks like:

and if I have prepared it properly, now when I save...

We get a final calculation showing us the label of the category corresponding to the value amount!
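The Large, Match, and Index steps we just chained together can be sketched in Python (hypothetical names; the categories and totals of one store as plain lists):

```python
def ranked_categories(categories, totals):
    """For each row, take the row-th largest total (Large), find which row
    it came from (Match), then pull that row's category label (Index)."""
    largest_first = sorted(totals, reverse=True)
    ranked = []
    for row in range(1, len(totals) + 1):
        value = largest_first[row - 1]                    # large(totals, row())
        position = totals.index(value) + 1                # match(value, totals), 1-based
        ranked.append((categories[position - 1], value))  # index(categories, position)
    return ranked
```

Note that, like the Match function, this takes the first matching row if two categories happen to have identical totals.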

Again, let's save this as a new version of the report

Finally, let's do some cosmetics to make it meaningful.

First, let's hide the Dimension "Top Level Category" from visualization, we'll be using the Calculation to show us the Category labels...

Next, let's hide the "Sale Line Total" from visualization, because we will also be using the Calculation to display these numbers...

Then, we can remove Calculation 2, showing us which row the values are on. This is not necessary to display, and is implicit in the final calculation, so it doesn't need to be on the report anymore...

Our visualization is looking a bit better...

Let's also change the order of Calculations, so that the Category shows up first, and the Total shows up second.

The best way to do this is to start a new calculation, and then copy+paste the first one...

into the last...

...then remove the first calculation...

...so that when we finally save...

we have a meaningful report showing us top levels of categories and totals by store!

Let's say that we want to rank categories of sales at each of our Multi-Store locations...

Now if I just concat the name of the shop and the sale line total...

I'll get a new value for sorting, but...

...it doesn't sort the totals as expected.

Do you see why?

Right! Now that the sales totals numbers have been turned to text, they are sorted by their first characters rather than their numeric value.

What we need to do is create a protocol for keeping the value of the totals, so that in the example above, 3,825 and 31,662 sort as 03825 and 31662.

This is easier than it sounds.

What we could do is add the same really high number to each of the totals, that way, their values are still in order, but the initial digit is less important.

Our top category's total here is six digits long, so if we were to add an eight-digit number to everything, the numeric and text sorting would work as expected.

So let's start by adding 10,000,000 to each Sale Line total...

Now, when I save...

3,825 becomes 10,003,825, 312,480 becomes 10,312,480, and so on, so now we can start our concat function...

We'll start a new calculation, and again from our best practices, start by creating the function just with brackets...

...we'll add the store name...

...then we'll copy the calculation from our first field...

...and paste it into the second set of brackets...

So, when I save...

Now our number sequence is preserved, so that when we sort...

our store grouping and sorting by dollar amount is preserved!
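The padded concat can be sketched in two lines of Python (a hypothetical `sort_key` helper; the store name and totals are illustrative):

```python
def sort_key(store_name, sale_total, pad=10_000_000):
    """Concat the store name with the padded total so plain text sorting
    keeps stores grouped and their totals in numeric order."""
    return store_name + str(sale_total + pad)
```

Because every padded total now has the same number of digits, sorting the text keys descending sorts the dollar amounts descending within each store.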

Now, because our first calculation is implicit in the second, we can remove the first, and we can also hide our sorting calculation from visualization

So that our final report only shows us the dimensions and measures that we want!

In this instance, our final calculation is:

The advantage to using this method is that you can custom-sort the stores instead of using Alphabetical sorting.

A drawback to using this method is that you will need to update the calculation as new stores are added to your list.

Similarly to our date sorting from the first method, we want to use really high numbers as the basis of the store, and we want the stores highest in our ranking to have the highest numbers.

This is how it works:

Say I have three stores, and I want to focus on Store 6 first, in my IF statement, I will identify Store 6 as 30,000,000...

Then, I can look at my next store, perhaps store 5, and I will assign to it, 20,000,000...

Then I could either define Store 7 as being 10,000,000, or I could define everything else as 10,000,000 (in this instance, there is only one other thing to be "everything else": Store 7).

So now when I save, I have a calculation showing me just the numbers...

Now, because the stores are numbers, we can add the sales total to this number for our final sorting.

Again, we start by creating the base of the calculation with brackets...

...select the calculation from above...

and paste it into the calculation below...

This gives us a final calculation creating the store and sales total as one number...

So that when we sort...

Our stores and sales totals are kept in the order we want!
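The IF-statement ranking plus the added total can be sketched like so (a hypothetical helper using the Store 5/6/7 example above):

```python
def store_sort_key(store_name, sale_total):
    """Assign each store a really high base number (the IF statement),
    then add the sale total for the final sort value."""
    bases = {"Store 6": 30_000_000, "Store 5": 20_000_000}
    return bases.get(store_name, 10_000_000) + sale_total  # everything else
```

Since every Store 6 key starts above 30,000,000 and every Store 5 key sits between 20,000,000 and 30,000,000, even Store 5's biggest total can never outrank Store 6.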

Similarly, we can remove the first calculation, and hide the last calculation...

So that our final report keeps the sorting calculation hidden!

If I want to sort by multiple tiers then, what I need is a singular column that references both values, perhaps in a creative way.

Let's say that I want to see top categories by month over the past five months...

If I sort by Month, I lose my dollar sorting...

If I sort by dollar, I lose my month sorting...

What I need is a value that will sort the months first and then sort the dollar amounts within those months.

This is what we do.

The first thing we want to do is turn the year and the month into a single number. (If we only extracted the month, and our results took place between October and February, the sorting would be off.)

From our date functions, we can create two calculations, one to extract the year from the completed month...

...and a second to extract the month from the completed month

So far, this is nothing remarkable, when I save, it will show me a number for the year and number for the month...

The next thing we'll do is make the year and the month one number, and we'll do this by adding the two.

What I'm going to do first, is multiply the year by 100, so that 2018 becomes 201,800. This way, January becomes 201,801. December becomes 201,812. Last December becomes 201,712...

To start, following our best practices, I'm just going to add brackets to hold the place for my calculations above, one extracting the year, one extracting the month...

I'm going to multiply the first calculation by 100, so I'm going to add that into the brackets first...

Then we'll select the content from our Extract Year calculation above...

and paste it between the first brackets...

then I will copy the Extract Months calculation from above...

..and paste it into the second set of brackets...

Now this will look a bit funny, but when I save, I will get one number for the year and month...

Again, following our best practices, this is a good time to save version one of this report

Once we've Saved as new, we can begin the next step

The next thing we'll do is get the date number ready to absorb the dollar number. Our dollar numbers get to be about six digits long, so to make space, let's multiply the date by 10,000,000.

Again, we'll start by building the base of our calculation around brackets...

Now, we'll copy the compound calculation from above...

...and paste it into the new one...

...so that when we save, we get a similar number, just multiplied by 10,000,000

...and again, this is a good time to save a new version of this report. Remember to

Now we get crazy. We're going to create one final calculation, including the dollar amount in the date amount...

First I start my calculation with brackets...

Then we copy the calculation prior...

...and paste it in between the brackets below...

Now, when I save, we have one number encoding, in sequence: the year, the month, and the sale value.

Just to make sure it works, let's now sort by our last calculation...

And we can see now, the values sort the dollars within the month, then sort the values within the previous month.
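The whole multi-tier key reduces to one small formula, sketched here in Python (hypothetical name, illustrative arguments):

```python
def month_dollar_sort_key(year, month, sale_total):
    """One number that sorts by year first, then month, then dollar amount."""
    year_month = year * 100 + month       # 2018, 1 -> 201801
    return year_month * 10_000_000 + sale_total
```

Any key from a later month beats every key from an earlier month, and within the same month the dollar amount decides the order.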

Let's save this as a new version...

Now finally, let's get rid of all the starter calculations that we don't need any more...

Our calculations are all implicit, so we can remove one without it breaking the others...

and when only the last one remains (I've re-named it now), we just need to hide it from visualization

So while my report includes the calculation for sorting, the final report that I receive via email does not.

The final calculation used here is: