GPy.examples package
Introduction
The examples in this package usually depend on pods, so make sure it is installed before running them. The easiest way to do this is to run pip install pods. pods provides access to the third-party data required by most of the examples.
The examples are executable, self-contained workflows: each provides its own source data, creates its own models, kernels and other objects as needed, runs optimisation as required, and displays output.
Viewing the source code of each example will clarify the steps taken in its execution, and may provide inspiration for developing user-specific applications of GPy.
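The workflow these examples follow — generate data, build a kernel, condition a model, predict — can be sketched in plain NumPy. This illustration does not use GPy itself; rbf_kernel here is a stand-in for GPy.kern.RBF, and the hyperparameters are fixed rather than optimised as the examples would do.

```python
import numpy as np

def rbf_kernel(X1, X2, variance=1.0, lengthscale=0.2):
    # Squared-exponential (RBF) covariance between two sets of 1-D inputs.
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

rng = np.random.default_rng(0)
X = np.linspace(0, 1, 20)                  # the example's "source data" inputs
y = np.sin(2 * np.pi * X) + 0.05 * rng.standard_normal(20)

# GP regression posterior mean at test inputs, with fixed hyperparameters
# (the GPy examples would call model.optimize() to learn these instead).
Xtest = np.linspace(0, 1, 50)
K = rbf_kernel(X, X) + 0.05 ** 2 * np.eye(20)
Kstar = rbf_kernel(Xtest, X)
posterior_mean = Kstar @ np.linalg.solve(K, y)
```

The same shape — data, kernel, model, optimise, plot — recurs in every example below, only with different data sources and likelihoods.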
Submodules
GPy.examples.classification module
Gaussian process classification examples
crescent_data(model_type='Full', num_inducing=10, seed=10000, kernel=None, optimize=True, plot=True)[source]
Run a Gaussian process classification on the crescent data. The demonstration calls the basic GP classification model and uses EP to approximate the likelihood.
Parameters:
- model_type – type of model to fit [‘Full’, ‘FITC’, ‘DTC’].
- num_inducing (int) – number of inducing variables (only used for ‘FITC’ or ‘DTC’).
- seed (int) – seed value for data generation.
- kernel (a GPy kernel) – kernel to use in the model.
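The underlying construction — a latent Gaussian process squashed through a link function to give class probabilities, with EP approximating the resulting non-Gaussian posterior — can be illustrated generatively in plain NumPy. This is a sketch of the model only; EP itself is not implemented here.

```python
import numpy as np
from math import erf

def probit(z):
    # Standard normal CDF, the link commonly used for GP classification.
    return 0.5 * (1.0 + erf(z / np.sqrt(2.0)))

rng = np.random.default_rng(1)
X = np.linspace(-3, 3, 7)
# Draw a latent function from a GP prior with an RBF covariance.
K = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2) + 1e-8 * np.eye(7)
f = np.linalg.cholesky(K) @ rng.standard_normal(7)
# Squash through the link to get Bernoulli class probabilities.
p = np.array([probit(v) for v in f])
labels = (rng.random(7) < p).astype(int)
```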
oil(num_inducing=50, max_iters=100, kernel=None, optimize=True, plot=True)[source]
Run a Gaussian process classification on the three-phase oil data. The demonstration calls the basic GP classification model and uses EP to approximate the likelihood.
sparse_toy_linear_1d_classification(num_inducing=10, seed=10000, optimize=True, plot=True)[source]
Sparse 1D classification example.
Parameters: seed (int) – seed value for data generation (default is 10000).
sparse_toy_linear_1d_classification_uncertain_input(num_inducing=10, seed=10000, optimize=True, plot=True)[source]
Sparse 1D classification example with uncertain inputs.
Parameters: seed (int) – seed value for data generation (default is 10000).
toy_heaviside(seed=10000, max_iters=100, optimize=True, plot=True)[source]
Simple 1D classification example using a Heaviside GP transformation.
Parameters: seed (int) – seed value for data generation (default is 10000).
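Here the link is the Heaviside step rather than a smooth sigmoid: labels are determined by the sign of the latent function. A minimal generative sketch (illustrative only, not GPy's implementation):

```python
import numpy as np

rng = np.random.default_rng(2)
X = np.linspace(-1, 1, 9)
# Latent function drawn from a GP prior with an RBF covariance.
K = np.exp(-0.5 * ((X[:, None] - X[None, :]) / 0.3) ** 2) + 1e-8 * np.eye(9)
f = np.linalg.cholesky(K) @ rng.standard_normal(9)
# Heaviside link: class 1 wherever the latent function is positive.
labels = np.heaviside(f, 0.0).astype(int)
```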
GPy.examples.dimensionality_reduction module
bgplvm_oil(optimize=True, verbose=1, plot=True, N=200, Q=7, num_inducing=40, max_iters=1000, **k)[source]
bgplvm_simulation_missing_data(optimize=True, verbose=1, plot=True, plot_sim=False, max_iters=20000.0, percent_missing=0.1, d=13)[source]
bgplvm_simulation_missing_data_stochastics(optimize=True, verbose=1, plot=True, plot_sim=False, max_iters=20000.0, percent_missing=0.1, d=13, batchsize=2)[source]
bgplvm_test_model(optimize=False, verbose=1, plot=False, output_dim=200, nan=False)[source]
Model for testing purposes. Samples from a GP with an RBF kernel and learns the samples with a new kernel. Normally not used for optimization, just for model checking.
cmu_mocap(subject='35', motion=['01'], in_place=True, optimize=True, verbose=True, plot=True)[source]
sparse_gplvm_oil(optimize=True, verbose=0, plot=True, N=100, Q=6, num_inducing=15, max_iters=50)[source]
ssgplvm_oil(optimize=True, verbose=1, plot=True, N=200, Q=7, num_inducing=40, max_iters=1000, **k)[source]
ssgplvm_simulation(optimize=True, verbose=1, plot=True, plot_sim=False, max_iters=20000.0, useGPU=False)[source]
GPy.examples.non_gaussian module
GPy.examples.regression module
Gaussian process regression examples
coregionalization_sparse(optimize=True, plot=True)[source]
A simple demonstration of coregionalization on two sinusoidal functions using sparse approximations.
coregionalization_toy(optimize=True, plot=True)[source]
A simple demonstration of coregionalization on two sinusoidal functions.
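Coregionalization couples several outputs by combining a covariance over inputs with a covariance over outputs; in the intrinsic coregionalization model the joint covariance is the Kronecker product B ⊗ K. A minimal sketch of that construction (illustrative only; the values of W and kappa are made up):

```python
import numpy as np

X = np.linspace(0, 1, 5)
# Covariance over inputs (RBF), shared by both outputs.
K = np.exp(-0.5 * ((X[:, None] - X[None, :]) / 0.3) ** 2)
# Coregionalization matrix over the two outputs: W W^T + diag(kappa).
W = np.array([[1.0], [0.5]])
B = W @ W.T + np.diag([0.1, 0.1])
# Joint covariance over all (output, input) pairs.
K_joint = np.kron(B, K)
```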
epomeo_gpx(max_iters=200, optimize=True, plot=True)[source]
Perform Gaussian process regression on the latitude and longitude data from the Mount Epomeo runs. Requires gpxpy to be installed in order to load the data.
multiple_optima(gene_number=937, resolution=80, model_restarts=10, seed=10000, max_iters=300, optimize=True, plot=True)[source]
Show an example of a multimodal error surface for Gaussian process regression. Gene 939 has bimodal behaviour where the noisy mode is higher.
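The remedy this example demonstrates — restarting the optimiser from several random initialisations and keeping the best optimum found — can be sketched on a toy multimodal surface. The surface below is made up for illustration; it is not the gene-expression likelihood.

```python
import numpy as np

def nll(theta):
    # A made-up multimodal "negative log likelihood" surface.
    return np.sin(3 * theta) + 0.3 * theta ** 2

def grad(theta):
    return 3 * np.cos(3 * theta) + 0.6 * theta

rng = np.random.default_rng(5)
optima = []
for start in rng.uniform(-3, 3, 10):      # model_restarts random initialisations
    t = start
    for _ in range(200):                  # plain gradient descent
        t -= 0.05 * grad(t)
    optima.append(t)
best = min(optima, key=nll)               # keep the lowest optimum found
```

Different starts settle into different local optima; comparing their objective values exposes the multimodality.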
olympic_100m_men(optimize=True, plot=True)[source]
Run a standard Gaussian process regression on the Rogers and Girolami Olympics data.
olympic_marathon_men(optimize=True, plot=True)[source]
Run a standard Gaussian process regression on the Olympic marathon data.
parametric_mean_function(max_iters=100, optimize=True, plot=True)[source]
A linear mean function with parameters that we’ll learn alongside the kernel.
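The construction amounts to modelling y = m(x; θ) + f(x), where θ is learnt alongside the kernel hyperparameters. A stripped-down sketch fitting only the linear mean, by least squares rather than jointly with the kernel as GPy would:

```python
import numpy as np

rng = np.random.default_rng(4)
X = np.linspace(0, 1, 30)
y = 2.0 + 3.0 * X + 0.1 * rng.standard_normal(30)   # data with a linear trend

# Estimate the mean-function parameters (intercept, slope).
A = np.column_stack([np.ones(30), X])
theta, *_ = np.linalg.lstsq(A, y, rcond=None)

# The GP then models whatever structure remains in the residuals.
residual = y - A @ theta
```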
robot_wireless(max_iters=100, kernel=None, optimize=True, plot=True)[source]
Predict the location of a robot given wireless signal strength readings.
silhouette(max_iters=100, optimize=True, plot=True)[source]
Predict the pose of a figure given a silhouette. This is a task from the Agarwal and Triggs 2004 ICML paper.
simple_mean_function(max_iters=100, optimize=True, plot=True)[source]
The simplest possible mean function. No parameters, just a simple sinusoid.
sparse_GP_regression_1D(num_samples=400, num_inducing=5, max_iters=100, optimize=True, plot=True, checkgrad=False)[source]
Run a 1D example of sparse GP regression.
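Sparse approximations replace the full n×n covariance by a low-rank surrogate built from a few inducing inputs; the Nyström-style construction Knn ≈ Knm Kmm⁻¹ Kmn underlies the DTC/FITC family of models referred to above. A rough sketch (illustrative only):

```python
import numpy as np

def rbf(A, B, lengthscale=0.3):
    return np.exp(-0.5 * ((A[:, None] - B[None, :]) / lengthscale) ** 2)

rng = np.random.default_rng(3)
X = np.sort(rng.uniform(0, 1, 100))       # 100 training inputs
Z = np.linspace(0, 1, 5)                  # 5 inducing inputs
Knn = rbf(X, X)
Knm = rbf(X, Z)
Kmm = rbf(Z, Z) + 1e-8 * np.eye(5)
# Rank-5 approximation to the full 100x100 covariance.
Q = Knm @ np.linalg.solve(Kmm, Knm.T)
```

All downstream algebra then works with the rank-m factors instead of the full matrix, reducing cost from O(n³) to O(nm²).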
sparse_GP_regression_2D(num_samples=400, num_inducing=50, max_iters=100, optimize=True, plot=True, nan=False)[source]
Run a 2D example of sparse GP regression.
toy_ARD(max_iters=1000, kernel_type='linear', num_samples=300, D=4, optimize=True, plot=True)[source]
toy_ARD_sparse(max_iters=1000, kernel_type='linear', num_samples=300, D=4, optimize=True, plot=True)[source]
toy_poisson_rbf_1d_laplace(optimize=True, plot=True)[source]
Run a simple demonstration of a Gaussian process with a Poisson likelihood, fit with the Laplace approximation to data whose latent function is sampled from an RBF covariance.
toy_rbf_1d(optimize=True, plot=True)[source]
Run a simple demonstration of a standard Gaussian process, fitting it to data sampled from an RBF covariance.
toy_rbf_1d_50(optimize=True, plot=True)[source]
Run a simple demonstration of a standard Gaussian process, fitting it to data sampled from an RBF covariance.