GPU-native convex optimization

Moreau

A compiled solver for conic programs.
Differentiable. Batched. PyTorch and JAX native.

import moreau

# P, q define the objective; A, b, and cones define the conic constraints.
solver = moreau.Solver(P, q, A, b, cones=cones)
solution = solver.solve()

GPU-Native

Interior-point method on GPU via cuDSS. Up to 100× faster than CPU solvers on an H100.

Batched

Solve many instances in parallel. One compile, many solves.
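For intuition only (plain NumPy, not the Moreau API): when a batch of problem instances shares its structure, the core numerical work reduces to one stacked linear solve, which is exactly the shape of computation that parallelizes well. Here a batch of equality-constrained QPs with a shared KKT matrix is solved in a single vectorized call.

```python
import numpy as np

# Illustration only (not the Moreau API): a batch of equality-constrained
# QPs, instance i being  minimize 0.5 x'Px + q_i'x  s.t.  Ax = b_i.
# All instances share the KKT matrix K, so one solve handles the batch.
rng = np.random.default_rng(0)
n, m, batch = 4, 2, 64
M = rng.standard_normal((n, n))
P = M @ M.T + np.eye(n)                        # positive definite objective
A = rng.standard_normal((m, n))
K = np.block([[P, A.T], [A, np.zeros((m, m))]])

q = rng.standard_normal((batch, n))
b = rng.standard_normal((batch, m))
rhs = np.concatenate([-q, b], axis=1)          # (batch, n + m)

# One vectorized linear solve for every instance at once.
sols = np.linalg.solve(K, rhs.T).T             # (batch, n + m)
xs = sols[:, :n]                               # batched primal solutions
assert xs.shape == (batch, n)
```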

Differentiable

Exact gradients through KKT conditions. Forward and backward pass on GPU. PyTorch and JAX native.
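What "exact gradients through KKT conditions" means can be sketched in the simplest case, an equality-constrained QP (plain NumPy, an illustration rather than the Moreau implementation): the solution satisfies a linear KKT system, so differentiating that system gives the Jacobian of the solution with respect to the problem data in closed form.

```python
import numpy as np

# Equality-constrained QP:  minimize 0.5 x'Px + q'x  s.t.  Ax = b.
# KKT conditions:  [[P, A'], [A, 0]] [x; nu] = [-q; b].
n, m = 3, 1
rng = np.random.default_rng(0)
M = rng.standard_normal((n, n))
P = M @ M.T + np.eye(n)                  # positive definite
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
q = rng.standard_normal(n)

K = np.block([[P, A.T], [A, np.zeros((m, m))]])

def solve(q):
    rhs = np.concatenate([-q, b])
    return np.linalg.solve(K, rhs)[:n]   # primal solution x(q)

x = solve(q)

# Differentiating the KKT system in q gives dx/dq = -(K^{-1})[:n, :n].
dx_dq = -np.linalg.inv(K)[:n, :n]

# Sanity check against finite differences.
eps = 1e-6
fd = np.column_stack([(solve(q + eps * e) - x) / eps for e in np.eye(n)])
assert np.allclose(dx_dq, fd, atol=1e-4)
```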

Up to 100× faster on H100

Power Systems: 4–14× faster
Model Predictive Control: 9–35× faster
Portfolio Optimization: 25–99× faster

vs. state-of-the-art CPU solvers.

Applications

Moreau is relevant wherever convex optimization appears in a loop.

Control

Model predictive control (MPC) and moving horizon estimation (MHE) for robotics, buildings, and autonomous systems.

Finance

Portfolio optimization, risk management, trading.

Energy

Optimal power flow, dispatch, grid operations.

Optimization Layers

Embed constrained optimization inside neural network training loops.

Optimization Layers

A layer in your neural network that solves an optimization problem.

An optimization layer maps inputs to the solution of a convex optimization problem. The layer is differentiable, so gradients flow through the solve and the entire network trains end-to-end.

This lets neural networks make decisions that respect hard constraints — physics, budgets, safety limits — by construction, not by hope.
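A minimal sketch of the idea with a hypothetical projection layer in plain NumPy (not the Moreau API): the layer's output satisfies a budget constraint exactly, by construction, and a hand-written vector-Jacobian product carries gradients back through the solve.

```python
import numpy as np

# Hypothetical optimization layer (illustration, not the Moreau API):
# x(z) = argmin_x 0.5 * ||x - z||^2  subject to  sum(x) == 1.
# The closed-form solution projects z onto the constraint plane, so the
# budget constraint holds exactly for every input.
def layer(z):
    n = z.size
    return z - (z.sum() - 1.0) / n

# The layer is differentiable: its Jacobian is I - (1/n) * ones(n, n),
# so upstream loss gradients flow through it during backpropagation.
def layer_vjp(z, grad_out):
    n = z.size
    return grad_out - grad_out.sum() / n

z = np.array([0.2, 1.5, -0.3])           # upstream activation
x = layer(z)
assert np.isclose(x.sum(), 1.0)          # hard constraint holds exactly

# Check the backward pass against finite differences of g . layer(z).
g = np.array([1.0, -2.0, 0.5])           # dLoss/dx from downstream
eps = 1e-6
fd = np.array([(g @ (layer(z + eps * e) - x)) / eps for e in np.eye(3)])
assert np.allclose(layer_vjp(z, g), fd, atol=1e-4)
```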


Diagram: Neural Network Layer → Optimization Layer (minimize f(x; θ) s.t. g(x; θ) ≤ 0) → Loss

Team

Shane Barratt

CEO

Parth Nobel

CTO

Steven Diamond

COO

An optimization research lab. Creators of CVXPY and CVXPYlayers.

Get started

Free for academic use. Enterprise licensing for production deployments.

Questions? Get in touch or email info@optimalintellect.com