πŸš€ Reproducible Experimentation Framework

Run Experiments Anywhere
With Minimal Effort

Kiso helps researchers run and reproduce experiments across edge, cloud, and testbed environments such as FABRIC and Chameleon. Define your experiments declaratively, and let Kiso handle the infrastructure complexity.

Kiso is designed for:
  • Running workflow experiments across testbeds, with built-in support for Pegasus.
  • Running agentic experiments.

What does Kiso do?

[Figure: Kiso experiment big picture]

Kiso in action

# Install Kiso
$ pip install kiso[all]
# Define your experiment in YAML
$ vim experiment.yml
# Provision resources across multiple testbeds
$ kiso up
# Run across multiple testbeds
$ kiso run
# Deprovision resources
$ kiso down
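
For orientation, an experiment definition might look like the sketch below. This is an illustrative assumption, not Kiso's actual schema: the key names (resources, software, experiments, outputs) and values are hypothetical, so consult the Kiso documentation for the real specification.

```yaml
# experiment.yml — hypothetical sketch; key names are illustrative
# assumptions, not Kiso's documented schema.
name: hello-world
resources:
  - testbed: fabric        # one of the supported testbeds
    nodes: 1
software:
  - pip:
      packages: [pegasus-wms]
experiments:
  - kind: shell            # Kiso also supports Pegasus workflow experiments
    script: ./run.sh
outputs:
  - results/               # collected back to a central location
```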

Focus on Research, Not Infrastructure

Without Kiso

You write custom scripts to provision resources, install software, and manage execution (start, wait, stop) instead of focusing on your experiment.

You also need to learn testbed-specific APIs to do any of the above.

With Kiso

You focus on designing and evaluating your experiments rather than managing infrastructure.

Kiso handles resource provisioning, software setup, experiment execution, and result collection.

Kiso supports FABRIC, Chameleon, and Vagrant testbeds, and can run Shell and Pegasus workflow experiments.

Complete Experiment Lifecycle

Kiso manages every stage of your experiment with YAML-based configuration: no custom orchestration code required.

1. πŸ—οΈ Provision Resources

Declaratively provision computing resources across one or more supported testbeds. Kiso handles authentication, allocation, and configuration across diverse infrastructure providers.

2. βš™οΈ Install & Configure

Automatically install and configure software stacks, workload management systems, and execution environments. Deploy containers, workflow engines, agent runtimes, and custom dependencies.

3. πŸ”¬ Execute & Collect

Run experiments in a controlled, repeatable manner across distributed infrastructure. Automatically collect results from all resources back to a central location for analysis.

FABRIC Testbed

πŸš€ Starter Example: A basic hello-world example for the FABRIC testbed.


πŸ€– Pydantic Agent: A template for running AI agent workloads on provisioned infrastructure: spins up Ollama with a local LLM and a Pydantic agent that returns structured output.


πŸ”¬ COLMENA: Hyper-distributed agent swarms across heterogeneous edge devices, coordinated as a unified computing platform spanning the full compute continuum.

Chameleon Cloud

πŸš€ Starter Example: A basic hello-world example for the Chameleon testbed.


🦠 Plankifier: Deep learning classification of freshwater plankton images to automate ecosystem monitoring at scale, replacing costly manual microscopic annotation.

Vagrant

πŸš€ Starter Example: A basic hello-world example for Vagrant.


πŸ‹ Orcasound: Real-time orca detection using hydrophone sensors across Washington state, training on live audio to identify whale calls with automated ML pipelines.


Built for Reproducibility

Everything you need to run reliable, reproducible experiments across distributed environments

πŸ“‹ Declarative Config

YAML-based experiment specifications capture what should run, where, and how, ensuring complete reproducibility without custom code.

πŸ”„ Multi-Environment

Seamlessly run experiments across edge, cloud, and testbed environments through a unified interface, with support for major research testbed providers.

⚑ Zero Custom Scripts

No more writing and maintaining custom orchestration code. Define everything in YAML and let Kiso handle the complexity.

πŸ”Œ Extensible

Plugin architecture supports custom software installers and experiment orchestrators to fit your specific needs.

πŸ›‘οΈ Reliable

Robust error handling and automated resource cleanup ensure your experiments run smoothly and don't leave resources stranded.

πŸ“Š Centralized Results

Automatically collect experiment results from all distributed resources to a central location for easy analysis and sharing.


Start Your Next Experiment

Join researchers using Kiso to simplify their edge-to-cloud experimentation workflows

Starter Template · Read Documentation · View on GitHub