Single-objective optimization of the classic Rosenbrock problem

Before starting this tutorial, we suggest reading documentation#getstarted, which provides an example of data communication between your client code and Indie Solver.

You need a registered account to communicate with the solver. If you are a student or a researcher, don't forget to contact us at contact@indiesolver.com after registering your account to upgrade it to the standard subscription plan, which is free for academic researchers. It will allow you to deal with more challenging optimization problems.

The source code of this tutorial is available for download and includes two files: IndieSolver.java (our thin wrapper for communicating with Indie Solver) and Rosenbrock10D.java (the code for this tutorial). You will also need to include json-simple-1.1.1.jar, an old and simple JSON library.

In this tutorial, we will optimize the well-known Rosenbrock problem, which is relatively simple and quite popular across different subfields of optimization:

\begin{aligned} & \text{minimize} \,\, f_{rosen}(x)= \sum^{n-1}_{i=1}\left[100(x_{i+1} - x_i^2)^2 + (1-x_i)^2\right] \\[0.5em] & \text{subject to} \,\, x \in [-2, 2]^n \end{aligned}

with n parameters. Here, we set n to 10, making the problem moderately difficult to optimize. For n>3, the problem has several local minima, with the global optimum at [1, 1, ..., 1], where f_{rosen}([1, 1, ..., 1])=0. The region around [0, 0, ..., 0] is somewhat attractive for optimizers because f_{rosen}([0, 0, ..., 0])=n-1 is already relatively small. Often initially attracted to that region, optimizers then follow a narrow valley from [0, 0, ..., 0] to [1, 1, ..., 1], hence the nicknames Rosenbrock's valley and Rosenbrock's banana function.

The code below defines our "Rosenbrock 10-D" problem with 10 parameters, where each parameter is a floating-point value that ranges between -2.0 and 2.0, with 0.0 selected as its default value. Then, we define a single metric called "obj1" to be minimized.
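A minimal sketch of this definition using json-simple is given below. The field names ("problem_name", "parameters", "metrics", and so on) are assumptions made for illustration; please check the downloadable sources for the exact schema expected by Indie Solver.

import org.json.simple.JSONArray;
import org.json.simple.JSONObject;

// Build the "Rosenbrock 10-D" problem description.
// NOTE: field names below are illustrative assumptions.
JSONObject problem = new JSONObject();
problem.put("problem_name", "Rosenbrock 10-D");

JSONArray parameters = new JSONArray();
for (int i = 1; i <= 10; i++) {
    JSONObject param = new JSONObject();
    param.put("name", "x" + i);   // parameters x1 .. x10
    param.put("type", "float");
    param.put("min", -2.0);
    param.put("max", 2.0);
    param.put("default", 0.0);
    parameters.add(param);
}
problem.put("parameters", parameters);

JSONArray metrics = new JSONArray();
JSONObject metric = new JSONObject();
metric.put("name", "obj1");
metric.put("goal", "minimize");   // single objective to be minimized
metrics.add(metric);
problem.put("metrics", metrics);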

Candidate solutions will be evaluated in a function called "evaluate", which implements f_{rosen}(x). The result of this evaluation is stored in "obj1".
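A sketch of such a function is shown below. The exact layout of the solution object (parameters keyed by name, metrics written back into the solution) is an assumption for illustration.

// Evaluate f_rosen(x) for one candidate solution and store the
// result in the "obj1" metric. The solution layout is assumed.
static JSONObject evaluate(JSONObject solution) {
    JSONObject params = (JSONObject) solution.get("parameters");
    double[] x = new double[10];
    for (int i = 0; i < 10; i++)
        x[i] = ((Number) params.get("x" + (i + 1))).doubleValue();
    double f = 0.0;
    for (int i = 0; i < x.length - 1; i++)
        f += 100.0 * Math.pow(x[i + 1] - x[i] * x[i], 2)
           + Math.pow(1.0 - x[i], 2);
    JSONObject metrics = new JSONObject();
    metrics.put("obj1", f);
    solution.put("metrics", metrics);
    return solution;
}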


In order to communicate with Indie Solver, you should initialize a worker and tell it to connect to a particular IP address and port while providing your secret authentication token. All of this information is available on your profile page. Then, you can create a problem from your previously defined "problem" object. Note that every time you create a problem with some "problem_name", the server will overwrite all information stored for that "problem_name". Therefore, when you want to continue optimizing an already existing problem, you can request its description by calling "ask_problem_description" and attach it to your worker by calling "set_problem", as shown in the last two commented-out lines below:
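A sketch of this step is given below. The IndieSolver wrapper class and the method names come from the downloadable sources mentioned above, but the exact signatures, host, and port shown here are assumptions; substitute the real values from your profile page.

// Connect to Indie Solver (host, port, and token are placeholders --
// copy the real values from your profile page).
IndieSolver worker = new IndieSolver();
worker.connect("solver.indiesolver.com", 8080, "YOUR_AUTH_TOKEN");

// Create the problem defined above. This overwrites any existing
// problem stored under the same "problem_name".
worker.create_problem(problem);

// To continue optimizing an already existing problem instead:
// JSONObject description = worker.ask_problem_description("Rosenbrock 10-D");
// worker.set_problem(description);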

The following JSON request describing the problem to be created will be sent to Indie Solver:
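The exact shape of the request depends on the wrapper version; assuming the field names used in the sketches above, it might look as follows (the "request_type" field and the abbreviated parameter list are illustrative):

{"request_type": "create_problem",
 "problem": {
    "problem_name": "Rosenbrock 10-D",
    "parameters": [
      {"name": "x1",  "type": "float", "min": -2.0, "max": 2.0, "default": 0.0},
      {"name": "x2",  "type": "float", "min": -2.0, "max": 2.0, "default": 0.0},
      ...
      {"name": "x10", "type": "float", "min": -2.0, "max": 2.0, "default": 0.0}
    ],
    "metrics": [{"name": "obj1", "goal": "minimize"}]
 }}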

Your worker can request solutions iteratively, one after another, by calling "ask_new_solutions". The reply to this call will include the solutions to be evaluated. Check the status of the reply in case the server detected any errors; this avoids unnecessary and potentially expensive calls to the objective function. Once some solutions are evaluated, you can communicate the results to the server by calling "tell_metrics".
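A sketch of this loop is given below; the reply fields "status" and "solutions" are assumptions made for illustration.

// Request, evaluate, and report solutions 700 times.
for (int evaluation = 0; evaluation < 700; evaluation++) {
    JSONObject reply = worker.ask_new_solutions();
    if (!"success".equals(reply.get("status"))) {  // assumed status value
        System.out.println("Server reported an error: " + reply);
        break;  // avoid wasting expensive objective function calls
    }
    JSONArray solutions = (JSONArray) reply.get("solutions");
    JSONArray evaluated = new JSONArray();
    for (Object obj : solutions)
        evaluated.add(evaluate((JSONObject) obj));
    worker.tell_metrics(evaluated);
}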


The loop described above will request and evaluate solutions 700 times. If you are currently on our free subscription plan, only 200 function evaluations will be performed because of the limit of at most 20 times the number of parameters (here, 20 × 10 = 200) function evaluations per problem.

The entire code of this tutorial example is given below:
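The snippets above assemble into the following condensed sketch of Rosenbrock10D.java. It relies on the IndieSolver wrapper from the downloadable sources; all field names, the host, and the port remain illustrative assumptions.

import org.json.simple.JSONArray;
import org.json.simple.JSONObject;

public class Rosenbrock10D {

    static final int N = 10;  // problem dimension

    // Build the "Rosenbrock 10-D" problem description
    // (field names are illustrative assumptions).
    static JSONObject defineProblem() {
        JSONObject problem = new JSONObject();
        problem.put("problem_name", "Rosenbrock 10-D");
        JSONArray parameters = new JSONArray();
        for (int i = 1; i <= N; i++) {
            JSONObject param = new JSONObject();
            param.put("name", "x" + i);
            param.put("type", "float");
            param.put("min", -2.0);
            param.put("max", 2.0);
            param.put("default", 0.0);
            parameters.add(param);
        }
        problem.put("parameters", parameters);
        JSONArray metrics = new JSONArray();
        JSONObject metric = new JSONObject();
        metric.put("name", "obj1");
        metric.put("goal", "minimize");
        metrics.add(metric);
        problem.put("metrics", metrics);
        return problem;
    }

    // Evaluate f_rosen(x) and store the result in the "obj1" metric.
    static JSONObject evaluate(JSONObject solution) {
        JSONObject params = (JSONObject) solution.get("parameters");
        double[] x = new double[N];
        for (int i = 0; i < N; i++)
            x[i] = ((Number) params.get("x" + (i + 1))).doubleValue();
        double f = 0.0;
        for (int i = 0; i < N - 1; i++)
            f += 100.0 * Math.pow(x[i + 1] - x[i] * x[i], 2)
               + Math.pow(1.0 - x[i], 2);
        JSONObject metrics = new JSONObject();
        metrics.put("obj1", f);
        solution.put("metrics", metrics);
        return solution;
    }

    public static void main(String[] args) {
        // Placeholder host, port, and token -- use your profile page values.
        IndieSolver worker = new IndieSolver();
        worker.connect("solver.indiesolver.com", 8080, "YOUR_AUTH_TOKEN");
        worker.create_problem(defineProblem());
        for (int evaluation = 0; evaluation < 700; evaluation++) {
            JSONObject reply = worker.ask_new_solutions();
            if (!"success".equals(reply.get("status"))) {
                System.out.println("Server reported an error: " + reply);
                break;
            }
            JSONArray solutions = (JSONArray) reply.get("solutions");
            JSONArray evaluated = new JSONArray();
            for (Object obj : solutions)
                evaluated.add(evaluate((JSONObject) obj));
            worker.tell_metrics(evaluated);
        }
    }
}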


You can store the suggested solutions locally in whatever way you prefer. We also provide a dashboard to access your problems and solutions. Below is an example of how it might look for a newly created account. To inspect "Rosenbrock 10-D", click "view" in the corresponding row of the table.

After asking to view "Rosenbrock 10-D", the page will be updated. You can verify the correctness of your problem description:

The evolution of the objective function value over the number of function evaluations is shown below. You can modify what is shown on the different axes. Importantly, you can apply different transformations to better visualize your data; for instance, here we apply a \log_{10}() transformation to the left y-axis. You can click on solutions of interest to store them in a field located below the figure.

The evolution of parameter values over the number of function evaluations is shown below. This figure often helps you better understand your problem and detect dependencies between parameters. In our particular case of f_{rosen}(x), we see how parameter values steadily move, one after another (earlier indices first), from 0.0 to 1.0 along the valley.

Parallel coordinates offer another way to visualize the evolution of solutions. The figure below shows parallel coordinates for both metrics and parameters (note that you can select that option). There is not much to see here for f_{rosen}(x) because the optimal parameters are all at 1.0. However, this representation can be very useful when the problem at hand has multiple local optima.

Finally, all evaluated solutions are stored in the table below.

In this introductory tutorial, we showed how to set up a simple optimization problem and view its optimization results in the dashboard. You may need to make quite a few adjustments to the problem description to match your actual problem at hand. Have a look at our other tutorials and let us know at contact@indiesolver.com if you need help setting up your experiment.