ONYX REPORTING LTD.

VIDEO: 10 Tips to Polish Your Dashboards & Simplify Charts

28/1/2021

I had the opportunity to appear as a guest speaker on Janice Taylor's webinar series on dashboard design and best practices, https://www.reportsyourway.com/.  During this 15-minute presentation, I reviewed 10 easy checkpoints dashboard designers can implement to produce concise and effective dashboards.


Preventing Stockout and Improving Supply Chain Logistics with Data

21/5/2020


The Problem.

Acme Corp is a national distributor of electronics and hardware to small and medium-sized companies.  Because size and cashflow prevent Acme Corp's SMB customers from carrying large inventories, they frequently stock out or request expensive last-minute air freight deliveries.

Acme Corp recognized that by combining customer sell-out with product shipment data, they could apply forecasting models that would automate and optimize product replenishment for their SMBs.  This would deliver a better customer experience while preventing stockouts and creating new opportunities for optimizing product distribution logistics.

Unfortunately, although SMBs were happy to send till data to Acme Corp, the Domo implementation team did not have access to reliable starting inventories at the customer locations, so they approached me to design a process for deriving a starting inventory.

In this tutorial, we'll:
  • build a dataflow that derives starting inventory (a minimal sketch of one possible derivation follows this list)
  • learn why certain ETL tiles (blocking functions) have such a drastic impact on query performance and how to optimize around them.
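As a teaser, here's a minimal MySQL 5.6 sketch of one way to back into a starting inventory (not necessarily the exact approach used in the full tutorial).  The stock_movements table and its columns are illustrative assumptions: qty_change is positive for shipments received and negative for sell-out.  The idea is that the smallest starting inventory consistent with history is the deepest cumulative deficit the running balance ever reaches.

-- Running totals via a self-join, since MySQL 5.6 has no window functions
SELECT t.customer_id,
       t.product_id,
       GREATEST(0, -MIN(t.running_total)) AS derived_starting_inventory
FROM (
    SELECT m1.customer_id,
           m1.product_id,
           m1.txn_date,
           SUM(m2.qty_change) AS running_total   -- net movement up to and including txn_date
    FROM stock_movements m1
    JOIN stock_movements m2
      ON m2.customer_id = m1.customer_id
     AND m2.product_id  = m1.product_id
     AND m2.txn_date   <= m1.txn_date
    GROUP BY m1.customer_id, m1.product_id, m1.txn_date
) t
GROUP BY t.customer_id, t.product_id;

If the running balance never dips below zero, the derived starting inventory is simply zero.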

While this blog post is more developer-focused, it's important to recognise that these types of data normalization activities are the bridge step to building forecasting models and implementing workflow automation.
  • Ex.  If we build a forecasting model around customer sales but don't consider available inventory, it's impossible to gauge whether a lack of sales is 'normal customer behaviour' or attributable to a stockout.
  • In a similar vein, before 'advanced analytics' can begin, we must carefully evaluate whether the available data represents reality.  If we don't apply domain expertise to the problem, we might dismiss rows with negative inventory as invalid or outlier measurements and therefore exclude them from our forecasting analysis.

With any project like this, validation is almost more important than the solution itself.  Make sure to identify and confirm assumptions with the domain experts before proceeding to the next step.

​In any case... on to the tutorial!


Domo - Parsing JSON in a MySQL 5.6 Dataflow

6/5/2020

Need to parse a JSON string in a MySQL dataflow?  See the tutorial below.

Doesn't MySQL support JSON-specific transforms?
Yes; however, Domo's MySQL 5.6 environment predates JSON parsing support, which was introduced in MySQL 5.7 and expanded in MySQL 8.0+.

Are there better ways to handle JSON parsing in Domo?
Domo's ETL and visualization engines require data structured in a relational format (one value per field).  Users can use custom connectors, Python scripting or Magic ETL to parse large string blobs, which should scale better than parsing the same data in SQL transforms.

Can I do this as a stored procedure?
Yes, see the Domo KB Article. 

At scale, stored procedures can be tuned to outperform SELECT ... INTO table; however, Onyx Reporting recommends the table approach during the initial implementation because the code may be easier to parse and troubleshoot than Dynamic SQL.
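Whichever route you take, the core trick in MySQL 5.6 is plain string manipulation.  Below is a minimal sketch; the json_staging table, raw_json column, and the "sku" key are illustrative assumptions, and it only handles flat, well-formed JSON with no nesting or escaped quotes.

-- Assumed input (illustrative): json_staging(id, raw_json) holding payloads
-- like {"sku":"A-100","qty":"4"}
SELECT
    id,
    SUBSTRING_INDEX(
        SUBSTRING_INDEX(raw_json, '"sku":"', -1),   -- keep everything after the key
        '"', 1                                      -- then keep everything up to the closing quote
    ) AS sku
FROM json_staging;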


Your 9-Point Data Pipeline Resiliency Check

20/4/2020

New data initiatives and BI projects are fickle things.  You only get one shot at making a good first impression with end-users and senior stakeholders, and the last thing you want them saying is, "I don't trust these numbers."
  • IF your data pipeline is held together with 'scotch tape and bubble gum,'
  • OR you can't go on holiday because the world / your pipeline might go up in flames,
  • OR you're having data trust issues in your user community,
  • THEN it might be time to review your data pipeline and make sure it meets these 9 resiliency checkpoints.

If you have a stake in project adoption but most of the checkpoints read like technical jargon, give me a call.  I'd be happy to sit with your developer team and co-review your data pipeline.


5+ Methods for Getting Data in and out of Domo

16/4/2020

An IT/IS stakeholder raised a concern with me that Domo, like many BI vendors, is a one-way street: a tool where you can easily push data in but not get it back out.  The underlying narrative: both business and IT stakeholders see vendor lock-in as a risk to minimize.  I assured the gentleman that, in addition to extensive dashboard creation and distribution capabilities, Domo positions itself as a data distribution hub by providing several data extraction methods suited to a broad mix of users with different usage requirements.


TLDR for Data Governance teams: My most successful enterprise clients position Domo at the end of a data lake or data warehouse pipeline, then use built-in tools to secure, monitor and distribute data to different stakeholders.  Domo's distribution methods get data into the hands of data quality and governance teams, data scientists, and business analysts in the platforms of their preference, which of course range from Tableau and Qlik to Jupyter notebooks, Office 365 products, or any other SQL or data storage platform.  IT/IS accomplish this without increasing administrative overhead by applying security measures in Domo which trickle through to all the data distribution options outlined below.

TLDR for Developers: Domo facilitates systems integration, business process automation, and app development, as well as data ingestion and distribution, via SDKs and CLIs that leverage an openly accessible API framework, including Java, JavaScript, Python and R libraries.  In addition to storing data in Domo's cloud, the platform also supports a federated query model that leaves data in its source location.  Check out developer.domo.com.  I could go on for days.


Data Science + Domo: KMeans

18/8/2018

Disclaimer:  Let's get it out of the way.  I am employed by Domo; however, the views and ideas presented in this blog are my own and not necessarily the views of the company.

In this post I'm going to show you how to use Domo with its built-in data science functions to perform KMeans clustering using the Hotels dataset from Data Mining Techniques for Marketing, Sales, and Customer Relationship Management, 3rd Edition, by Gordon S. Linoff and Michael J. A. Berry (Wiley, 2011).  Then I'll show you how you can do the same thing using Domo's platform to host the data and R to transform and analyze it.

Why are we doing this?
From the platform perspective, my goal is to showcase how Domo can support Business Analytics and Data Science workflows.

From the business perspective, an organization may want to cluster Hotels to facilitate creating 'hotel personas'.  These personas (ex. luxury business hotel versus weekend warrior favorite) may enable the creation of marketing campaigns or 'similar hotel' recommendation systems. 
  
​Disclaimer 2:  I do not own the Hotels dataset.

The Domo Platform Organizes Datasets

Step 1:  Upload the Hotels dataset to the Domo datacenter using the file connector. 

Duration:   < 2 minutes or 8-ish clicks.

Domo enables analytics workflows by assembling all the recordsets in one place.  The data can be sourced internally from a data warehouse, POS system, Excel spreadsheet or SSAS cube, or come from external sources like Facebook, Google Analytics, or Kaggle.
Once you've uploaded data to Domo you can apply Tags to keep track of your datasets or implement security to control who has access to your data (even down to the row-level).

Additional notes for InfoSec:
Given that the largest risk in data security is user behavior, finding solutions that are preferable to personal GitHub or Dropbox accounts, USB sticks or (God forbid) the desktop remains a priority.  One of Domo's most understated highlights is its ability to surface IT governed and cleansed datasets to analysts in a single place that's highly accessible yet fortified with bulletproof (Akamai) security.

Notes for architects and engineers:
Under the hood, Domo's platform combines the best of big data / data lake architecture (distributed, high availability, etc.) with (good) data mart recordset management techniques.  Data workers familiar with Hadoop or Amazon Web Services will recognize Domo as a modular yet integrated big data stack wrapped in an easy-to-use GUI that business users will appreciate and get value out of from day one.

When you set up a trial of Domo you effectively have a free data lake that's ready for use -- complete with the ability to store literally millions of rows and/or gigabytes of data at rates you'd expect from Amazon S3 or Microsoft Azure storage at a fraction of the cost.  Try it.

Introducing DataFlows

To the horror of every data scientist out there, in this blog, I'll skip data preprocessing, profiling or exploration and go straight to using Magic ETL to apply KMeans clustering on a set of columns in my data.

Side Note
ETL stands for Extract, Transform and Load, and Domo provides a proprietary drag-and-drop interface (Magic ETL) for data transformation.  Technically the data has already been extracted from your source system and loaded into Domo, so all we're left with is the transform phase.

Double Side Note for Clarity:  
Each DataSet is stored somewhere in Domo as a separate file.
Each DataFlow takes Input DataSets and creates Output DataSets -- new file(s).

You have 3 types of dataflows native to the Domo platform.
  • Magic ETL (Domo's proprietary drag and drop tool)
  • MySQL and Redshift.  NOTE:  Redshift is a beta feature for transforming 5 million+ row datasets.  If you need the feature 'turned on' give me a holler.
  • Additionally, Domo supports data ingestion via API, ODBC, and various other SDKs.  In other words, you can transform data on virtually any platform before pushing it to Domo.  In this blog, I'll do some data analysis in R and then push the results to Domo from R.
Magic ETL has a user-friendly set of Data Science functions.  They are in beta, so if you don't see them in Magic ETL, shoot me an email so we can get them enabled for you.

What is Clustering and why are we using it?
It is not my goal to make you a data scientist in a 5-minute blog post, but if you are interested, the Data Mining Techniques book I linked earlier may be a decent place to start.

Consider the following (completely fictitious) example:

"Jae, given your stay at the Threadneedles Hotel in London, you might like the Ace Hotel in Portland which has similar review rates by international guests, number of rooms and avg. room rates."  

In short, clustering allows you to group entities (hotels) by sets of attributes (in our case, number of rooms, number of domestic reviews, international reviews, whether the hotel is independent etc.).
How easy is Magic ETL? 

It's 1, 2, 3.  Drag transformation functions (1) into the workspace (2), then define the function parameters (3).

In this example, I used the previously uploaded hotels-wiley dataset as an input, then I applied a K-Means function over a subset of columns.

Note:  I explicitly defined the number of clusters to create (5).
If you don't know what KMeans does, Domo's documentation is on par with what you'd expect from any data science-lite platform.
Step 3:  Preview and Output the Dataset for visualization or further analysis

​
In the final steps, we'll add a SELECT function to choose which columns to keep.  In this case, we'll only keep columns that were used by the KMeans algorithm, as well as columns to identify the hotel (hotel_code).

Lastly, we add an Output Dataset function to create a new dataset in Domo which we can later visualize or use for further analysis.
 Beware the false prophets ...

THIS IS IMPORTANT. 
Do not fall into the trap of misinterpreting your results!

In the output dataset below, you'll see we have a new column, cluster, which happens to sit next to a column, bubble_rating.  The hasty analyst might conclude: "oh hey, there appears to be a correlation between cluster and hotel ratings."

​And all Data Scientists in the room #facepalm while #cheerForJobSecurity.
There are 5 clusters because, in an earlier step, we told the algorithm to create 5 clusters.  We could just as easily have created 3 or 7.  There is not necessarily a correlation between which cluster a hotel ended up in and its rating.  Keep in mind, the cluster number is just an arbitrary label.  If you re-run the function, ideally you'll end up with the same groupings of hotels, but they could easily have different cluster numbers (hotel_cluster_3 could become hotel_cluster_4, and hotel_cluster_4 could become hotel_cluster_1).

Side Note which will become important later:
It would be nice if Domo included metrics for measuring separation between clusters or the strength of clusters.  We arbitrarily told KMeans to create 5 clusters.  But who knows, maybe there are really only 3 clusters.  There are quantitative methods for identifying 'the right' number of clusters, but they aren't available in Domo out of the box.

Domo embraces all Analytic Tools

Let's do it again.  In R.
​

As demonstrated, Domo has user-friendly data science functionality suitable for the novice data scientist; however, in real-world applications, analysts will likely use Domo's data science functions to build and validate proofs of concept before transitioning to a 'proper' data science development platform to deliver analytics.

Domo has both Python and R packages that facilitate easy data manipulation in full-fledged data analytics environments.

To extract data from Domo into R:
  1. set up an access token & install the DomoR package
  2. find the dataset ID: https://<yourDomoInstance>.domo.com/datasources/0afe1c21-18f3-496a-acb5-599e07c9e33c/details/overview

The R Code:  

#install and load packages required to extract data from Domo
install.packages('devtools', dependencies = TRUE)
library(devtools)
install_github(repo='domoinc-r/DomoR')
library(DomoR)

#initialize connection to Domo Instance
domo_instance <- 'yourDomoInstance'
your_API_key <- 'yourAPIKey'
DomoR::init(domo_instance, your_API_key)

#extract data from Domo into R
datasetID <- 'yourDataSetID'
raw_df <- DomoR::fetch(datasetID)

## PROFIT!! ##
Earlier we asked the question: was 5 'the right' number of clusters?

Given the variables from before (number of rooms, number of domestic reviews, etc.), once NULL values have been removed and the variables scaled and centered, it appears that 6 clusters may have been a better choice.

Side bar: READ THE MANUAL!  Unless you read the detailed documentation, it's unclear how Domo's KMeans handles NULL values or whether any data pre-processing takes place.  This can have a significant impact on results.

​With the revelation that we should have used 6 KMeans clusters, we can either adjust our Magic ETL dataflow in Domo, or we can use R to create / replace a new dataset in Domo!

# create a brand-new dataset in Domo from the R data frame
DomoR::create(cluster_df, 'R_hotel_KMeans')
# or overwrite an existing dataset, identified by its dataset ID
DomoR::replace_ds('datasetID', cluster_df)

In Domo's Magic ETL, we'll bind both cluster results to the original dataset for comparison.
  1. Add the R generated dataset (R_hotel_KMeans) to Magic ETL
  2. JOIN the datasets on the hotel_code
  3. SELECT columns to keep (remove any duplicate or unwanted columns)
  4. Revel in the glory!

Wrap it up

NEXT STEPS:  Create 'Cluster Personas.'  
Typically, organizations would use clusters of attributes to define 'hotel personas'. 
  1. 'Budget Business Hotel'
  2. 'Luxury Business Hotel'
  3. 'Backpackers Delight'
  4. 'Destination for Honeymooners'
  5. 'Weekend Treat'
  6. 'Long-Term Stay'
Because the attributes of 'Budget Business Hotel' and 'Backpacker's Delight' are similar, with 5 clusters, maybe they get combined into 'Budget Hotel'.  Alternatively, perhaps 'Weekend Treat' combines with 'Destination for Honeymooners' into a 'Luxury Short Stay' persona.

From there, the outcome of this clustering and persona generation may influence discount packages or marketing campaigns to appeal to specific travelers.

REMEMBER:  Clustering is not intended to predict Ratings!  If you're still stuck on that, review the purpose of clustering.  
In this quick article, I gave the most cursory of overviews of how Domo's native (and beta) features can enable data science workflows.  We also explored how other analytic workflows can integrate into the Domo offering.

If you're interested in finding out more, I can connect you with a Domo Sales representative (not me), or I'd love to talk to you about your analytic workflows and use case to see if Domo's platform might be a good match for your organization!

Disclaimer: the views, thoughts, and opinions expressed in this post belong solely to the author and do not necessarily reflect the views of the author's employer, Domo Inc.

Data Warehouse and Cubes Tips and Tricks - March 2017

19/3/2017

Solutions:  
  • Set a default budget for Budget Amount
  • Invert the sign of Income Statement Accounts.

Why this blog series exists:
When you buy an out-of-the-box data warehouse & cube solution like Jet Enterprise from Jet Reports (www.jetreports.com) or ZapBI, it usually meets the 80/20 rule: the stock solution satisfies 80% of your reporting requirements, and you'll need further customization to meet your remaining analytics requirements.
Client requests from customers like you inspire this blog series.

Set a Default Value for Budget Amount

Problem:  Some measures require filtering by a default value to show reasonable results.  

Consider the example of a company that creates and revises its budget each quarter.  While it's perfectly reasonable to show the [Budget Amount] for March 2017 from the 'Initial 2017 Budget' or the 'Q1 2017 Amended' budget, it would be misleading to show the sum of both budgets for March 2017.  Similarly, if your organization uses a forecasting tool to calculate expected item consumption or sales, you may have multiple forecast numbers for the same item and month.
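A quick SQL illustration of the double-counting risk (the FactBudget table and its columns are illustrative names, not an actual Jet Enterprise schema):

-- Summing across budget versions overstates March 2017:
SELECT SUM([Budget Amount]) AS [March Budget (double counted)]
FROM dbo.FactBudget
WHERE [Budget Month] = '2017-03-01';

-- Constraining to a single (default) budget version returns the intended figure:
SELECT SUM([Budget Amount]) AS [March Budget]
FROM dbo.FactBudget
WHERE [Budget Month] = '2017-03-01'
  AND [Budget Name]  = 'Q1 2017 Amended';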

Solution:  To minimize the risk of 'double counting', we'll modify [Budget Amount] to always filter by a default [Budget Name].
In this solution, we hard-coded a budget into the Default Member as part of the dimension definition.  For improved maintainability, we could add an [IsDefaultBudget] attribute to the [Budget] dimension in our cube and data warehouse, then reference the new column when defining the Default Member.  Ideally our source ERP system can identify the current budget; otherwise, we can implement business processes and SQL code to auto-identify the correct budget, as sketched below.
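Here's one possible sketch of that auto-identification step.  The "most recently created budget for the current year" rule, along with the DimBudget table and column names, are assumptions; the right rule depends on how your ERP names and dates its budgets.

-- Flag the most recently created budget for the current year as the default
UPDATE d
SET d.[IsDefaultBudget] = CASE WHEN d.[Budget Name] = latest.[Budget Name] THEN 1 ELSE 0 END
FROM dbo.DimBudget AS d
CROSS JOIN (
    SELECT TOP (1) [Budget Name]
    FROM dbo.DimBudget
    WHERE [Budget Year] = YEAR(GETDATE())
    ORDER BY [Created Date] DESC
) AS latest;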

Note:  Because we can have multiple concurrent budgets, but shouldn't sum [Budget Amount] across different budgets, the measure is a semi-additive fact -- in Business Intelligence nomenclature semi-additive facts cannot be summed across all dimensions.  Other common examples include balance, profit percent, average unit price, company consolidation, as well as most cost-allocation schemes.  When customizing our data warehouses and cubes, we must be careful where we implement the calculation to avoid reporting errors.

Non-additive facts cannot be summed (ex. [First Invoice Date]).  Fully-additive facts are 'normal' facts which can be summed across all dimensions.

Invert the sign of Income Statement Accounts

Problem:  Sometimes the dimension used affects how we want to calculate a measure.

In many ERP systems the [Amount] or [Quantity] columns have counter-intuitive signs.  Consider G/L entries, which carry negative signs for revenue and positive signs for expenses, or inventory transactions, which show negative signs for item sales and shipments and positive signs for item purchases.  While the data makes sense to accountants and analysts, business users frequently appreciate seeing signs that intuitively make sense.

In cube-based reports, we'd like to invert the sign on [Amount] for all Income Statement accounts but not Balance Sheet accounts.  Note, it's arguably 'wrong' to invert the sign in the data warehouse because that would make our [Amount] column semi-additive, and potentially cause problems for our auditors!

Solution:  In the cube, use a SCOPE statement to assign a different calculation to [Amount] based on the [GL Account] dimension.
Note:  In SSAS cube nomenclature we typically deal with two out of three measure types: Standard and Calculated (avoid Derived measures when possible).

A Standard measure is typically a SUM or COUNT of a column that exists on the fact table in the data warehouse:  ex. sum(Amount) or sum(Quantity).

A Calculated measure is an expression that typically uses standard measures in conjunction with MDX expressions and functions.  Typical examples include division (profit margin percent or avg. daily sales) as well as all business functions (Year To Date, Balance, or  Previous Period). 

A Derived measure is a calculation based on fields from the same fact table -- ex. Profit Margin ([Sales Amount] - [Cost Amount]).  When possible, avoid derived measures and use standard measures instead -- just add the calculation as a new column in the data warehouse (see the sketch after this list).  Using standard instead of derived measures has two major advantages:
  1. Users reporting directly against the data warehouse can use the predefined column.
  2. The cubes will process faster during their overnight load.
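A minimal T-SQL sketch of "just add the calculation as a new column" (the fact table and column names are illustrative, not from a specific Jet Enterprise project):

-- Precompute the margin in the data warehouse so the cube can expose it as a
-- plain SUM (a standard measure) rather than a derived measure.
ALTER TABLE dbo.[Fact Sales Entry]
    ADD [Profit Amount] AS ([Sales Amount] - [Cost Amount]) PERSISTED;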

That's All Folks

Remember, out-of-the-box solutions are, by definition, designed to work for ALL customers.  Further customization is expected, even required!  Onyx Reporting is available for development services, training and mentoring as you implement minor (if not major) customizations to your Jet Enterprise implementation.

For clients not using Jet Enterprise, please contact us to find out how data warehouse automation can cut your business intelligence implementation time from weeks and months into days.

SQL - What Version of NAV is it?

15/12/2016

If you're anything like me, you've got multiple copies of NAV from various clients stored on your SQL server, and if you forget to write down which version of NAV they're on, that can be a pain.

Fortunately, table structure doesn't change THAAAAAT much ;) but just in case:

SELECT
    [NAV Database Version] = CASE
        WHEN [databaseversionno] = 40 THEN 'Navision 4 SP2'
        -- ...additional WHEN branches map later version numbers...
        ELSE 'Unknown'
    END
FROM [$ndo$dbproperty];  -- NAV stores databaseversionno in this system table


12 Questions for Dashboard Design (and 5 Mistakes to Avoid)

14/12/2016

Last week we explored 6 dashboard design tips to improve the aesthetics of existing dashboards.  In this post, we share design tips from our analysts about dashboard and OLAP cube requirements gathering as well as costly mistakes to avoid.

12 Tips to Remember

Tailor your dashboard to the audience AND the process they're trying to optimize.
  • How important is this measure to a user compared to other measures?
  • What other measures would provide comparative context?
  • Is there a sequence of measures a user would follow to drill into a question?
  • How might the answer to the previous questions differ by users at different levels of the organizational hierarchy?
Express the data in a way relevant to the audience.
  • What's the scope of your dashboard? (Company, department, individuals, a supplier)
  • What level of summarization or detail is appropriate?
  • What unit of measure is appropriate for each measure?
  • What complementary information should I include?
  • Again, how might the answers differ as scope or user changes?
Match the tool to the user. 
  • Managers and executives may prefer a dashboard that provides a non-interactive executive or strategic summary, business or data analysts may prefer to construct their own analyses against raw data straight from the data warehouse, and mid-level managers may require a live operational report.
  • Did you choose the right chart?
  • Should this be a PowerBI dashboard, a SQL query, an SSRS report, a Jet Professional Report, an Excel PivotTable?
  • How often does the data need to be refreshed?  Do operational decisions depend on activity or status changes in the last 15 minutes?  Last 3 hours?  Last 24 hours?

5 Mistakes to Avoid

Kicking off a project with an overly complex problem quickly leads to project stall or paralysis.
  • Complexity can arise when a variety of inputs are required to calculate a measure (eg. 'true' profitability or cost allocation). A bottom-up approach of first identifying simple measures which aggregate into increasingly complex summarizations can encourage quick wins and maintain momentum.
  • If your team insists on a top-down approach, divide and conquer. Decompose complex strategic objectives, goals, or measures into their composite pieces.
In your data warehouse, cubes and dashboards, avoid using metrics or abbreviations whose meanings are not immediately obvious.
  • The use of generic names like "Global Dimension 2", abbreviations or entity codes may affect adoption rates and user acceptance because they only resonate with a handful of seasoned analysts and fall flat with a broader audience.
  • In your visualizations, stave off confusion or frustration by opting for verbose titles, captions, and axis labels while avoiding legends.
KISS (Keep it simple).  Avoid clutter, non-succinct graphics, or unintelligible widgets. 
  • 3D effects, sparklines, temperature gauge charts, mapping charts, and pie charts are common visualization elements that add spice and sizzle to a dashboard; however, when poorly executed, these elements take up significant space without communicating a lot of information.
  • Ask yourself if you could express the same information with less ink, color or space.
Avoid failing to match metrics with goals.
  • "How does this dashboard aid a business process?" It's not enough to showcase the activities of a department.  Keep strategic goals or operational activities in mind. Does the dashboard provide enough context for viewers to reach an actionable conclusion?
Don't wait to get started.
  • Many are tempted to wait until development has finished before they begin laying out their dashboards, when in truth, report requirements should drive or at least inform development efforts.

Go Forth and do Great Things

Still looking for inspiration?
  • Here are some of my favorite Jet Reports from the Report Player
  • Can't create the chart you want with a PivotTable?  Use Excel's Cube Functions 
  • Need dimensions or measures added to your cubes?  Our services team has domain expertise on data warehouse and OLAP cube design for Dynamics NAV and GP.
  • Make sure to sign up for our weekly newsletter for special promotions and articles delivered straight to your inbox.  

Best Practices:  Working with Multiple Environments

29/11/2016

What exactly are "multiple environments?" It's an infrastructure that allows your business to separate BI development endeavors from the live environment the organization uses for reporting and analytics.

"Why would I need it?" Development is not always a quick 5 or even 15-minute fix. Larger projects can take days, weeks, even months to complete, so you need a sandbox for the developers to 'play' in while the rest of the organization continues forward with 100% uptime on their live reporting environment. Some organizations may even choose to separate the Dev sandbox from QA (Quality Assurance) efforts, so a third Test environment may be needed!
But "do I need multiple environments?" As with any business question, 'it depends'. If your development is limited to just adding a field here or a new measure there during the initial implementation of your business intelligence project, you may be able to wiggle by without separated environments.

It may make sense to separate development from the live environment if:
  • your organization has many customizations in the ERP (which won't be reflected in an off-the-shelf implementation)
  • you're tackling questions unanswerable in the off-the-shelf BI environment you purchased
  • your organization is actively using and reliant on the reporting environment
​​

It may make sense to implement a QA environment if your organization has
  • formal processes for auditing data quality
  • legal obligations to report 100% accurate data to external bodies (ex. financial auditors, government bodies)
  • low tolerance for seeing 'weird data' in their live environment

Ready to get started today?
  • Review your current implementation with our  4hr - Scoping & Assessment Workshop
  • Put your BI development team through our DWA Master's Course

This presentation is preparation for a training series Onyx Reporting will be conducting in 2017! Join our mailing list to keep up with special offers, training opportunities and our blog series.