**Artifact:** A file associated with the execution of an ML pipeline. Often an artifact is data generated during model training that you want saved after the pipeline finishes executing (e.g. charts, metrics, or logs). An artifact can also be generated during model inference (e.g. media transcripts or model predictions).
**Classification:** A machine learning task where you attempt to predict a discrete value (e.g. "happy" vs. "sad").
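As an illustration (this is a toy sketch, not Hume API code), a classifier maps an input to one of a fixed set of discrete labels. Here a hypothetical continuous sentiment score is thresholded into the two labels from the example above:

```python
def classify_sentiment(score: float, threshold: float = 0.5) -> str:
    """Toy classifier: map a continuous sentiment score in [0, 1]
    to one of two discrete labels."""
    return "happy" if score >= threshold else "sad"

print(classify_sentiment(0.9))  # happy
print(classify_sentiment(0.2))  # sad
```

A real classifier learns its decision boundary from data rather than using a hand-picked threshold, but the output is the same kind of thing: a discrete label.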
**Credits:** Credits represent dollars spent on the Hume Platform and are consumed when you run jobs on the Platform. See our pricing for more details.
**Dataset:** A collection of media files and features saved in the Registry under a dataset name. A Dataset can be used to train a custom model.
**Hume Datasets:** Our proprietary data collection process aggregates millions of samples from around the world into large and diverse expression datasets. Information on our datasets can be found here.
**Embedding:** An embedding is a list of numbers that represents some data numerically. It is a distilled (compressed) representation that preserves important information while removing as much unimportant information (noise) as possible. Embeddings are often visualized as points in a scatter plot to compare their similarity with other embeddings.
**Feature:** A feature of a dataset is one column that can be used to train a custom model. The model uses the features you provide to learn a mapping between raw media files and the target label or quantity you want to predict.
**Inference:** Inference is the act of making predictions with a model; a close synonym is "prediction". Similarly, the result of model prediction can be called "an inference". Less commonly, the term is used as a verb: "to inference".
**Job:** A job is Hume's term for a unit of work done by the Platform, and the unit of computation used by Hume APIs to track usage. A batch inference job produces predictions for a collection of files. A streaming inference job produces predictions for the lifetime of a WebSocket connection. A training job trains a custom model on a custom dataset. A custom model inference job applies a trained custom model to new files.
**Custom Model:** You can train your own model on top of our powerful embedding models to predict a target label or quantity that correlates with the expressive content of your media files.
**Hume Models:** Our ML models, built from our proprietary datasets, provide vocal, facial, and text analysis. Using our model inference APIs, you can leverage these models to generate expression embeddings.
**Hume Platform:** Hume's suite of tools built to empower you to analyze a wide variety of expressions.
**Playground:** A web page within the Portal where you can explore the predictions of Hume Models and Custom Models to build an intuitive grasp of how best to leverage the Hume Platform. You can try out our models with example files, your own files, or live using your webcam or a built-in text editor.
**Portal:** The hub for the Hume Platform on the web. The Portal contains information about your API key, credit usage, and payment details, as well as the Playground and your workspace of custom files, datasets, and models.
**Predictions:** Predictions are results produced by Hume Models and Custom Models. They include expression outputs as well as associated data like face bounding boxes and timestamps.
**Registry:** The Hume Registry is the storage repository for users' files, datasets, models, embeddings, predictions, and jobs.
**Regression:** A machine learning task where you attempt to predict a continuous value (e.g. joy on a scale from 1-10).
**Training:** Training is the process of improving a model's ability to predict a target feature given an input dataset.
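As a toy illustration of "improving a model's ability to predict" (not Hume's actual training procedure), gradient descent repeatedly nudges a model parameter to reduce prediction error on a dataset:

```python
# Tiny dataset where the true relationship is y = 2x.
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]

w = 0.0              # model parameter, starting from a bad guess
learning_rate = 0.05

for _ in range(200):  # each iteration is one training step
    # Gradient of the mean squared error of prediction w * x.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= learning_rate * grad  # step downhill to reduce the error

print(round(w, 3))   # converges close to the true value 2.0
```

Each step makes the model's predictions slightly better on the dataset; training stops when the error is low enough or stops improving.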