As part of my "Data Science Demystified" series, I had the opportunity to interview Data Science consultant and Major Domo practitioner John "Doc" Stevens from Crystal Ballers about the Domo Data Science Toolkit.
These tools include the PyDomo and DomoR libraries for local development, Magic ETL tiles, and a Domo Jupyter Notebook integration. Because each tool has distinct strengths and weaknesses, each fits into a different part of your data pipeline and data science project rollout.
In this video, we review each feature and make recommendations on how and when to use each tool in your pipeline.
Need to parse a JSON string in a MySQL dataflow? See the tutorial below.
Doesn't MySQL support JSON-specific transforms?
Yes; however, Domo's MySQL 5.6 environment predates native JSON parsing, which was introduced in MySQL 5.7 and expanded in MySQL 8.0+.
Are there better ways to handle JSON parsing in Domo?
Domo's ETL and visualization engines require data structured in a relational format (one value per field). Users can rely on custom connectors, Python scripting, or Magic ETL to parse large string blobs, which should scale better than parsing the same data in SQL transforms.
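As a minimal sketch of the Python approach, the snippet below explodes a JSON string blob into one relational row per nested item. The column names and payload shape here are hypothetical; in Domo, logic like this would typically live in a Magic ETL Python script tile or a PyDomo job.

```python
import json

# Hypothetical raw rows as they might arrive from a connector:
# each row carries an entire JSON document as a string in one column.
raw_rows = [
    {"order_id": 1,
     "payload": '{"customer": "Acme", "items": [{"sku": "A1", "qty": 2}, {"sku": "B2", "qty": 1}]}'},
    {"order_id": 2,
     "payload": '{"customer": "Globex", "items": [{"sku": "C3", "qty": 5}]}'},
]

def flatten(rows):
    """Explode each JSON blob into one relational row per line item."""
    out = []
    for row in rows:
        doc = json.loads(row["payload"])
        for item in doc.get("items", []):
            out.append({
                "order_id": row["order_id"],       # carried through from the source row
                "customer": doc.get("customer"),   # top-level JSON attribute
                "sku": item.get("sku"),            # nested array attributes,
                "qty": item.get("qty"),            # now one value per field
            })
    return out

flat = flatten(raw_rows)
# flat now holds three rows, one per line item, ready for ETL or visualization.
```

The same pattern scales to deeper nesting: each level of the document becomes another loop (or another output table), keeping every output field atomic.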
Can I do this as a stored procedure?
Yes, see the Domo KB Article.
At scale, stored procedures can be tuned to outperform SELECT ... INTO table; however, Onyx Reporting recommends the table approach during the initial implementation because the code is easier to read and troubleshoot than dynamic SQL.
New data initiatives and BI projects are fickle things. You only get one shot at making a good first impression with end users and senior stakeholders, and the last thing you want them saying is, "I don't trust these numbers."
If you have a stake in project adoption but most of the checkpoints read like technical jargon, give me a call. I'd be happy to sit with your development team and co-review your data pipeline.