No-code ML allows users to make predictions without writing a single line of code. A person with no technical background or programming experience can still leverage the power of machine learning and artificial intelligence.
Traditional ML vs the No-Code ML Process
The Traditional Machine Learning Process
When you search for how to approach the machine learning process, you may come across a post like this one listing close to ten steps: collect data, build a model, train it, improve it, and so on.
Typically, the traditional process looks like this:
The No-Code Machine Learning Process
No-code ML simplifies this process into this:
This is a much simpler route for someone who wants to make data predictions without first mastering technical machine learning skills. With drag-and-drop data predictions, you simply edit your queries by choosing identifier and prediction columns and setting aside columns you don't want to use. This lets you quickly predict metrics like churn, LTV, contract tenure, and more.
Use Cases With No-Code Data Predictions
We've even compared Google ML's "No-Code" process to our own and avoided wasting time with technical documentation and training algorithms.
As we produce more and more use cases, the point we want to make is this: whatever you want to predict, you can, as long as you have the data.
How No-Code Algorithms Work
We have dedicated a post to this, but to reiterate: our no-code algorithms aren't just "out of the box" algorithms; they are custom-built and trained based on your prediction.
Here's a rundown on how no-code algorithms speed up the traditional ML process from the moment you start predicting in Obviously AI.
1. Preprocessing/Feature Engineering/Normalization
Once you press “Go” to make a prediction, Obviously AI begins the preprocessing phase, where it turns raw data into inputs a machine learning algorithm can understand. It removes rows or columns with empty/null values, drops feature columns with too many unique non-numeric values, upsamples or downsamples the data, and runs several other processes to make your data machine-learning ready. This step, often called feature engineering, is a standard ML practice for improving model accuracy.
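To make this concrete, here is a minimal sketch of the kind of cleanup described above, written with pandas. This is illustrative only, not Obviously AI's actual pipeline; the `max_unique` threshold and column names are assumptions chosen for the example.

```python
import pandas as pd

def preprocess(df: pd.DataFrame, max_unique: int = 20) -> pd.DataFrame:
    """Illustrative cleanup: drop null rows and high-cardinality text columns."""
    # Remove rows containing empty/null values
    df = df.dropna()
    # Drop non-numeric feature columns with too many unique values
    # (e.g. free-text IDs that carry no predictive signal)
    for col in df.select_dtypes(exclude="number").columns:
        if df[col].nunique() > max_unique:
            df = df.drop(columns=col)
    return df
```

A real pipeline would also handle resampling of imbalanced classes and encoding of categorical values, as the paragraph above notes.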
Obviously AI also performs normalization, rescaling the values of numerical columns so they fall within comparable ranges. Not every dataset requires normalization; it is mainly used to improve accuracy when two columns have very different ranges. Say, for example, there’s an Age column and a Salary column. Age will mostly fall between 0 and 100, while Salary could be anywhere from $40K to $1M. We don’t want the column with the larger range to dominate the smaller one and skew the prediction, so we normalize the data, putting Salary on a scale similar to Age.
2. Training Models
This is where machine learning gets technical.
Think of building an algorithm the way you would make music on a synthesizer. When you take a synthesizer out of the box, it comes with pre-loaded settings. The same is true for algorithms: there are basic algorithms that are essentially a blank canvas, and each one has different settings. Think of these settings as the knobs on a synth, like Attack, Decay, and Release. A professional musician can take those pre-set sounds and quickly find the one that fits a track, far faster than a beginner could.

Obviously AI is that professional musician: it takes a pre-set algorithm, tries out thousands of permutations based on the dataset’s properties, and finds the right combination on the fly for optimized accuracy. This is music to a non-technical business user’s ears, because an ML beginner would take far longer to build the most accurate algorithm on their own.
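In standard ML tooling, this knob-twisting is called hyperparameter tuning. The sketch below shows the general idea with scikit-learn's grid search on a toy dataset; it is an assumption-laden stand-in for whatever search strategy Obviously AI actually uses, and the parameter grid is purely illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

# A toy dataset standing in for the user's uploaded table
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# The "knobs" on this algorithm: every combination is tried automatically
param_grid = {"max_depth": [2, 4, 8], "min_samples_leaf": [1, 5, 10]}
search = GridSearchCV(DecisionTreeClassifier(random_state=0), param_grid, cv=3)
search.fit(X, y)

best = search.best_params_  # the setting combination that scored highest
```

A production system would search far more algorithms and permutations than this nine-combination grid, but the principle is the same: try settings, score each one, keep the best.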
All the user has to do is enter a query and press “Go.”
3. Testing for Accuracy
On top of the previously mentioned processes Obviously AI performs for accuracy, we take one extra step to improve the accuracy of your Prediction Report.
While Obviously AI is testing your dataset, it sets aside a section of rows as a separate holdout to check for consistency. For example, out of a 1,000-row dataset, it separates 100 rows and tests whether the model is just as accurate on them as on the rest of your data. This ensures the algorithm is accurate on all of your data, even rows it never saw during training.
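This is the classic train/test split. A minimal sketch with scikit-learn, mirroring the 100-of-1,000 example (the model choice and split ratio are illustrative assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)

# Hold out 10% (100 of 1,000 rows) that the model never sees during training
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.1, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
# Accuracy on the unseen holdout rows estimates real-world performance
holdout_accuracy = model.score(X_test, y_test)
```

If the holdout accuracy is close to the training accuracy, the model generalizes; if it is much lower, the model has likely overfit.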
And the crazy thing about these three steps? They all happen in 30 seconds or less.
As of now, we use classification and regression algorithms and we are working hard to expand the capabilities of our platform.
Check out our demo video to get a better understanding of how our platform works.