Notebooks as experiments#

The .ipynb format can store tables and charts in a single standalone file, which makes it a great choice for model evaluation reports. NotebookCollection lets you retrieve results from previously executed notebooks and compare them.

import urllib.request

from ploomber_engine import execute_notebook
import jupytext

from sklearn_evaluation import NotebookCollection

Let’s first generate a few notebooks. We have a train.py script that trains a single model; let’s convert it to a Jupyter notebook:

# download script
url = "https://raw.githubusercontent.com/ploomber/sklearn-evaluation/master/doc-assets/nb-collection/train.py"  # noqa
urllib.request.urlretrieve(url, filename="train.py")

# convert
nb = jupytext.read("train.py")
jupytext.write(nb, "train.ipynb")

We use ploomber-engine to execute the notebook with different parameters. We’ll train four models: two random forests, a linear regression, and a support vector regression:

# models with their corresponding parameters
params = [
    {"model": "sklearn.ensemble.RandomForestRegressor", "params": {"n_estimators": 50}},
    {
        "model": "sklearn.ensemble.RandomForestRegressor",
        "params": {"n_estimators": 100},
    },
    {"model": "sklearn.linear_model.LinearRegression", "params": {}},
    {"model": "sklearn.svm.LinearSVR", "params": {}},
]

# ids to identify each experiment
ids = [
    "random_forest_1",
    "random_forest_2",
    "linear_regression",
    "support_vector_regression",
]

# output files
files = [f"{i}.ipynb" for i in ids]

# execute notebooks using ploomber-engine
for f, p in zip(files, params):
    execute_notebook("train.ipynb", output_path=f, parameters=p, progress_bar=False)
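The parameters dictionary is injected into the notebook at execution time, so train.py receives the model as a dotted import path. One plausible way the script could resolve that string into an estimator is with importlib; this is a hypothetical sketch, not necessarily the actual code in train.py:

```python
import importlib

# hypothetical default values; ploomber-engine overwrites these at execution time
model = "sklearn.ensemble.RandomForestRegressor"
params = {"n_estimators": 50}

# split the dotted path into module and class name, then instantiate
module_name, _, class_name = model.rpartition(".")
cls = getattr(importlib.import_module(module_name), class_name)
regressor = cls(**params)
print(type(regressor).__name__)
```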

To use NotebookCollection, we pass a list of paths and, optionally, ids for each notebook (paths are used as ids by default).

The only requirement is that the cells whose output we want to extract have tags; each tag then becomes a key in the notebook collection. For instructions on adding tags, see this.

Extracted tables color certain cells to identify the best and worst metrics. By default, it assumes that metrics are errors (smaller is better). If you are using scores (larger is better), pass scores=True; if you have both, pass a list with the names of the scores:

nbs = NotebookCollection(paths=files, ids=ids, scores=["r2"])

To list the available tags:

list(nbs)
['model_name', 'feature_names', 'model_params', 'plot', 'metrics', 'houseage']

model_params contains a dictionary with the model parameters; let’s get them (click on the tabs to switch):

# pro-tip: when typing the tag, press the "Tab" key for autocompletion!
nbs["model_params"]
{
    'bootstrap': True,
    'ccp_alpha': 0.0,
    'criterion': 'squared_error',
    'max_depth': None,
    'max_features': 1.0,
    'max_leaf_nodes': None,
    'max_samples': None,
    'min_impurity_decrease': 0.0,
    'min_samples_leaf': 1,
    'min_samples_split': 2,
    'min_weight_fraction_leaf': 0.0,
    'n_estimators': 50,
    'n_jobs': None,
    'oob_score': False,
    'random_state': None,
    'verbose': 0,
    'warm_start': False,
}
{
    'bootstrap': True,
    'ccp_alpha': 0.0,
    'criterion': 'squared_error',
    'max_depth': None,
    'max_features': 1.0,
    'max_leaf_nodes': None,
    'max_samples': None,
    'min_impurity_decrease': 0.0,
    'min_samples_leaf': 1,
    'min_samples_split': 2,
    'min_weight_fraction_leaf': 0.0,
    'n_estimators': 100,
    'n_jobs': None,
    'oob_score': False,
    'random_state': None,
    'verbose': 0,
    'warm_start': False,
}
{
    'copy_X': True,
    'fit_intercept': True,
    'n_jobs': None,
    'positive': False,
}
{
    'C': 1.0,
    'dual': True,
    'epsilon': 0.0,
    'fit_intercept': True,
    'intercept_scaling': 1.0,
    'loss': 'epsilon_insensitive',
    'max_iter': 1000,
    'random_state': None,
    'tol': 0.0001,
    'verbose': 0,
}

plot contains a y_true vs. y_pred chart:

nbs["plot"]

On each notebook, metrics outputs a single-row data frame with mean absolute error (mae), mean squared error (mse), and R² (r2) as columns.

For single-row tables, a “Compare” tab shows all results at once:

nbs["metrics"]
  random_forest_1 random_forest_2 linear_regression support_vector_regression
mae 0.338954 0.335678 0.529571 0.634219
mse 0.266278 0.259645 0.536969 0.842978
r2 0.800180 0.805158 0.597049 0.367415
mae mse r2
0 0.338954 0.266278 0.80018
mae mse r2
0 0.335678 0.259645 0.805158
mae mse r2
0 0.529571 0.536969 0.597049
mae mse r2
0 0.634219 0.842978 0.367415

We can see that the second random forest performs best on all metrics.
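The tagged metrics cell in train.py might look something like the following sketch (hypothetical: the toy y_test/y_pred values here stand in for the script's actual test-set predictions):

```python
import pandas as pd
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

# toy values standing in for the model's test-set predictions
y_test = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5, 0.0, 2.0, 8.0]

metrics = pd.DataFrame(
    {
        "mae": [mean_absolute_error(y_test, y_pred)],
        "mse": [mean_squared_error(y_test, y_pred)],
        "r2": [r2_score(y_test, y_pred)],
    }
)
metrics  # a single-row table, so NotebookCollection shows a "Compare" tab
```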

houseage contains a multi-row table with error metrics broken down by the HouseAge indicator feature. Multi-row tables do not display the “Compare” tab:

nbs["houseage"]
mae mse r2
HouseAge
1.0 0.837960 0.702177 NaN
2.0 0.603115 0.670724 0.618309
3.0 0.468688 0.364688 0.440429
4.0 0.421817 0.378879 0.637909
5.0 0.377598 0.301233 0.610541
6.0 0.432726 0.731151 0.338277
7.0 0.368733 0.274706 0.329934
8.0 0.356222 0.244719 0.809457
9.0 0.288975 0.209516 0.737485
10.0 0.364628 0.295919 0.697792
11.0 0.376185 0.307549 0.673065
12.0 0.358564 0.276587 0.687196
13.0 0.350239 0.300623 0.705605
14.0 0.300519 0.177565 0.765378
15.0 0.352025 0.283976 0.730159
16.0 0.321841 0.190594 0.762655
17.0 0.351795 0.287565 0.693935
18.0 0.330103 0.232023 0.787512
19.0 0.327004 0.221794 0.798871
20.0 0.351433 0.285906 0.756704
21.0 0.367176 0.303935 0.774654
22.0 0.392445 0.390616 0.751614
23.0 0.292767 0.173997 0.880010
24.0 0.379270 0.295199 0.719500
25.0 0.376928 0.293372 0.786681
26.0 0.333518 0.224102 0.834041
27.0 0.353817 0.268162 0.800468
28.0 0.341323 0.307461 0.828202
29.0 0.294239 0.188852 0.847658
30.0 0.324308 0.273429 0.793268
31.0 0.330396 0.261934 0.807779
32.0 0.297834 0.238183 0.840184
33.0 0.342377 0.282937 0.768155
34.0 0.304721 0.266000 0.827936
35.0 0.288745 0.201500 0.853619
36.0 0.252276 0.137117 0.871008
37.0 0.319271 0.206182 0.853125
38.0 0.292509 0.182636 0.884272
39.0 0.362820 0.400139 0.760198
40.0 0.337796 0.285386 0.839105
41.0 0.287775 0.153817 0.899147
42.0 0.279633 0.201136 0.821031
43.0 0.324803 0.223944 0.817105
44.0 0.321689 0.206051 0.845169
45.0 0.290420 0.149500 0.870415
46.0 0.325370 0.272864 0.738759
47.0 0.287781 0.164253 0.859718
48.0 0.403556 0.353920 0.808319
49.0 0.389213 0.326508 0.827351
50.0 0.364305 0.286180 0.859673
51.0 0.236668 0.087948 0.952861
52.0 0.453691 0.461815 0.751661
mae mse r2
HouseAge
1.0 1.025040 1.050708 NaN
2.0 0.599480 0.640647 0.635425
3.0 0.491735 0.352691 0.458837
4.0 0.422475 0.375968 0.640691
5.0 0.399153 0.323178 0.582169
6.0 0.443223 0.727583 0.341506
7.0 0.345617 0.252422 0.384289
8.0 0.339093 0.220331 0.828446
9.0 0.299885 0.210356 0.736432
10.0 0.358371 0.281947 0.712062
11.0 0.379343 0.315918 0.664169
12.0 0.343266 0.247947 0.719586
13.0 0.358444 0.292065 0.713986
14.0 0.295055 0.162326 0.785514
15.0 0.345515 0.263934 0.749203
16.0 0.311929 0.180310 0.775462
17.0 0.350213 0.284582 0.697111
18.0 0.327515 0.230986 0.788461
19.0 0.340109 0.222722 0.798030
20.0 0.350719 0.264215 0.775162
21.0 0.362950 0.298327 0.778812
22.0 0.379060 0.353989 0.774905
23.0 0.297731 0.185384 0.872158
24.0 0.379329 0.294101 0.720544
25.0 0.367293 0.278624 0.797404
26.0 0.328050 0.222476 0.835244
27.0 0.346733 0.251072 0.813184
28.0 0.335664 0.303719 0.830293
29.0 0.289874 0.195881 0.841988
30.0 0.316018 0.272230 0.794175
31.0 0.317401 0.247245 0.818559
32.0 0.307448 0.244857 0.835706
33.0 0.331701 0.275161 0.774527
34.0 0.310319 0.273442 0.823123
35.0 0.287749 0.200531 0.854322
36.0 0.253023 0.139995 0.868301
37.0 0.304725 0.188430 0.865770
38.0 0.288737 0.179938 0.885981
39.0 0.346665 0.366419 0.780406
40.0 0.318467 0.264754 0.850738
41.0 0.285991 0.152549 0.899979
42.0 0.274300 0.195553 0.825998
43.0 0.321044 0.208990 0.829318
44.0 0.317549 0.207013 0.844447
45.0 0.274411 0.140583 0.878144
46.0 0.322116 0.268414 0.743020
47.0 0.286032 0.148251 0.873385
48.0 0.395226 0.348147 0.811446
49.0 0.374620 0.271707 0.856329
50.0 0.392728 0.336579 0.834961
51.0 0.262487 0.106165 0.943098
52.0 0.453416 0.454718 0.755477
mae mse r2
HouseAge
1.0 0.077045 0.005936 NaN
2.0 0.621786 0.704731 0.598956
3.0 0.394869 0.309883 0.524520
4.0 0.502120 0.526965 0.496384
5.0 0.402867 0.349914 0.547602
6.0 0.535902 0.955226 0.135479
7.0 0.471769 0.422926 -0.031604
8.0 0.435266 0.300705 0.765865
9.0 0.395658 0.334463 0.580930
10.0 0.548596 0.445740 0.544787
11.0 0.497246 0.449512 0.522155
12.0 0.436461 0.347855 0.606596
13.0 0.432717 0.332560 0.674330
14.0 0.412923 0.298274 0.605881
15.0 0.454044 0.359019 0.658850
16.0 0.451385 0.339579 0.577125
17.0 0.450863 0.388885 0.586097
18.0 0.422341 0.361656 0.668794
19.0 0.440418 0.385330 0.650572
20.0 0.480448 0.449165 0.617777
21.0 0.481173 0.437860 0.675358
22.0 0.557506 0.622022 0.604467
23.0 0.437108 0.361490 0.750713
24.0 0.480626 0.422856 0.598201
25.0 0.551525 0.541321 0.606389
26.0 0.507414 0.484308 0.641343
27.0 0.522484 0.532184 0.604017
28.0 0.574381 0.744957 0.583745
29.0 0.484700 0.541898 0.562865
30.0 0.538348 0.534287 0.596041
31.0 0.538239 0.521302 0.617441
32.0 0.515134 0.528955 0.645083
33.0 0.536822 0.510495 0.581690
34.0 0.608867 1.018926 0.340902
35.0 0.571631 0.534728 0.611543
36.0 0.503451 0.421068 0.603886
37.0 0.537139 0.475265 0.661442
38.0 0.567287 0.519219 0.670993
39.0 0.601552 0.741179 0.555814
40.0 0.610118 0.653960 0.631311
41.0 0.581788 0.574146 0.623550
42.0 0.503874 0.438613 0.609726
43.0 0.565345 0.495386 0.595420
44.0 0.553312 0.508499 0.617904
45.0 0.547449 0.493563 0.572186
46.0 0.613946 0.646647 0.380899
47.0 0.580874 0.482213 0.588160
48.0 0.765080 0.945197 0.488086
49.0 0.753427 0.834546 0.558714
50.0 0.593333 0.654756 0.678945
51.0 0.641995 0.661623 0.645383
52.0 0.736178 1.016155 0.453567
mae mse r2
HouseAge
1.0 0.193754 0.037540 NaN
2.0 0.679608 0.761965 0.566386
3.0 0.524016 0.545297 0.163305
4.0 0.485238 0.472876 0.548077
5.0 0.559706 0.838124 -0.083596
6.0 0.555477 0.910909 0.175588
7.0 0.444283 0.361410 0.118445
8.0 0.614306 0.814732 0.365634
9.0 0.460012 0.475838 0.403792
10.0 0.571608 0.529316 0.459435
11.0 0.533639 0.592044 0.370639
12.0 0.477705 0.445787 0.495840
13.0 0.541834 0.524872 0.486002
14.0 0.447129 0.379744 0.498232
15.0 0.504835 0.463720 0.559361
16.0 0.456473 0.369832 0.539451
17.0 0.501301 0.527441 0.438629
18.0 0.556046 0.542122 0.503522
19.0 0.568688 0.646944 0.413333
20.0 0.588009 0.679004 0.422192
21.0 0.646195 0.790413 0.413965
22.0 0.666628 0.965490 0.386062
23.0 0.595931 0.702324 0.515670
24.0 0.539087 0.603740 0.426324
25.0 0.670458 0.928963 0.324523
26.0 0.606856 0.801570 0.406394
27.0 0.678399 0.936663 0.303055
28.0 0.742805 1.193480 0.333126
29.0 0.576657 0.686531 0.446193
30.0 0.651604 0.847362 0.359334
31.0 0.672763 0.871524 0.360429
32.0 0.627263 0.827165 0.444991
33.0 0.621080 0.802157 0.342696
34.0 0.665736 0.949088 0.386077
35.0 0.641520 0.887327 0.355394
36.0 0.502494 0.576166 0.457979
37.0 0.653331 0.822596 0.414017
38.0 0.606862 0.792589 0.497771
39.0 0.726668 1.110587 0.334429
40.0 0.748082 1.116574 0.370500
41.0 0.626704 0.898465 0.410905
42.0 0.541726 0.623128 0.445546
43.0 0.600441 0.764316 0.375786
44.0 0.699280 0.865856 0.349379
45.0 0.553557 0.662126 0.426077
46.0 0.701446 0.877762 0.159628
47.0 0.574860 0.719396 0.385591
48.0 0.842336 1.354862 0.266214
49.0 0.830958 1.318246 0.302947
50.0 0.731482 1.142752 0.439659
51.0 0.946629 1.628087 0.127378
52.0 1.144136 2.213584 -0.190347

If we only compare two notebooks, the output is a bit different:

# only compare two notebooks
nbs_two = NotebookCollection(paths=files[:2], ids=ids[:2], scores=["r2"])

When comparing single-row tables, extra columns (diff, diff_relative, and ratio) show the difference between experiments. Error reductions are shown in green, increases in red:

nbs_two["metrics"]
  random_forest_1 random_forest_2 diff diff_relative ratio
mae 0.338954 0.335678 -0.003276 -0.98% 0.990335
mse 0.266278 0.259645 -0.006633 -2.55% 0.975090
r2 0.800180 0.805158 0.004978 0.62% 1.006221
mae mse r2
0 0.338954 0.266278 0.80018
mae mse r2
0 0.335678 0.259645 0.805158
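The extra columns can be reproduced from the two single-row tables. A rough pandas sketch, not the library's internal implementation; the displayed numbers suggest the relative difference is taken with respect to the second experiment:

```python
import pandas as pd

# the two single-row metrics tables, as Series keyed by metric name
a = pd.Series({"mae": 0.338954, "mse": 0.266278, "r2": 0.800180}, name="random_forest_1")
b = pd.Series({"mae": 0.335678, "mse": 0.259645, "r2": 0.805158}, name="random_forest_2")

comparison = pd.concat([a, b], axis=1)
comparison["diff"] = b - a
comparison["diff_relative"] = ((b - a) / b * 100).round(2).astype(str) + "%"
comparison["ratio"] = b / a
print(comparison)
```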

When comparing multi-row tables, the “Compare” tab appears, showing the difference between the tables:

nbs_two["houseage"]
  mae mse r2
HouseAge      
1.000000 0.187080 0.348531 nan
2.000000 -0.003635 -0.030077 0.017116
3.000000 0.023047 -0.011997 0.018408
4.000000 0.000658 -0.002911 0.002782
5.000000 0.021555 0.021945 -0.028372
6.000000 0.010497 -0.003568 0.003229
7.000000 -0.023116 -0.022284 0.054355
8.000000 -0.017129 -0.024388 0.018989
9.000000 0.010910 0.000840 -0.001053
10.000000 -0.006257 -0.013972 0.014270
11.000000 0.003158 0.008369 -0.008896
12.000000 -0.015298 -0.028640 0.032390
13.000000 0.008205 -0.008558 0.008381
14.000000 -0.005464 -0.015239 0.020136
15.000000 -0.006510 -0.020042 0.019044
16.000000 -0.009912 -0.010284 0.012807
17.000000 -0.001582 -0.002983 0.003176
18.000000 -0.002588 -0.001037 0.000949
19.000000 0.013105 0.000928 -0.000841
20.000000 -0.000714 -0.021691 0.018458
21.000000 -0.004226 -0.005608 0.004158
22.000000 -0.013385 -0.036627 0.023291
23.000000 0.004964 0.011387 -0.007852
24.000000 0.000059 -0.001098 0.001044
25.000000 -0.009635 -0.014748 0.010723
26.000000 -0.005468 -0.001626 0.001203
27.000000 -0.007084 -0.017090 0.012716
28.000000 -0.005659 -0.003742 0.002091
29.000000 -0.004365 0.007029 -0.005670
30.000000 -0.008290 -0.001199 0.000907
31.000000 -0.012995 -0.014689 0.010780
32.000000 0.009614 0.006674 -0.004478
33.000000 -0.010676 -0.007776 0.006372
34.000000 0.005598 0.007442 -0.004813
35.000000 -0.000996 -0.000969 0.000703
36.000000 0.000747 0.002878 -0.002707
37.000000 -0.014546 -0.017752 0.012645
38.000000 -0.003772 -0.002698 0.001709
39.000000 -0.016155 -0.033720 0.020208
40.000000 -0.019329 -0.020632 0.011633
41.000000 -0.001784 -0.001268 0.000832
42.000000 -0.005333 -0.005583 0.004967
43.000000 -0.003759 -0.014954 0.012213
44.000000 -0.004140 0.000962 -0.000722
45.000000 -0.016009 -0.008917 0.007729
46.000000 -0.003254 -0.004450 0.004261
47.000000 -0.001749 -0.016002 0.013667
48.000000 -0.008330 -0.005773 0.003127
49.000000 -0.014593 -0.054801 0.028978
50.000000 0.028423 0.050399 -0.024712
51.000000 0.025819 0.018217 -0.009763
52.000000 -0.000275 -0.007097 0.003816
mae mse r2
HouseAge
1.0 0.837960 0.702177 NaN
2.0 0.603115 0.670724 0.618309
3.0 0.468688 0.364688 0.440429
4.0 0.421817 0.378879 0.637909
5.0 0.377598 0.301233 0.610541
6.0 0.432726 0.731151 0.338277
7.0 0.368733 0.274706 0.329934
8.0 0.356222 0.244719 0.809457
9.0 0.288975 0.209516 0.737485
10.0 0.364628 0.295919 0.697792
11.0 0.376185 0.307549 0.673065
12.0 0.358564 0.276587 0.687196
13.0 0.350239 0.300623 0.705605
14.0 0.300519 0.177565 0.765378
15.0 0.352025 0.283976 0.730159
16.0 0.321841 0.190594 0.762655
17.0 0.351795 0.287565 0.693935
18.0 0.330103 0.232023 0.787512
19.0 0.327004 0.221794 0.798871
20.0 0.351433 0.285906 0.756704
21.0 0.367176 0.303935 0.774654
22.0 0.392445 0.390616 0.751614
23.0 0.292767 0.173997 0.880010
24.0 0.379270 0.295199 0.719500
25.0 0.376928 0.293372 0.786681
26.0 0.333518 0.224102 0.834041
27.0 0.353817 0.268162 0.800468
28.0 0.341323 0.307461 0.828202
29.0 0.294239 0.188852 0.847658
30.0 0.324308 0.273429 0.793268
31.0 0.330396 0.261934 0.807779
32.0 0.297834 0.238183 0.840184
33.0 0.342377 0.282937 0.768155
34.0 0.304721 0.266000 0.827936
35.0 0.288745 0.201500 0.853619
36.0 0.252276 0.137117 0.871008
37.0 0.319271 0.206182 0.853125
38.0 0.292509 0.182636 0.884272
39.0 0.362820 0.400139 0.760198
40.0 0.337796 0.285386 0.839105
41.0 0.287775 0.153817 0.899147
42.0 0.279633 0.201136 0.821031
43.0 0.324803 0.223944 0.817105
44.0 0.321689 0.206051 0.845169
45.0 0.290420 0.149500 0.870415
46.0 0.325370 0.272864 0.738759
47.0 0.287781 0.164253 0.859718
48.0 0.403556 0.353920 0.808319
49.0 0.389213 0.326508 0.827351
50.0 0.364305 0.286180 0.859673
51.0 0.236668 0.087948 0.952861
52.0 0.453691 0.461815 0.751661
mae mse r2
HouseAge
1.0 1.025040 1.050708 NaN
2.0 0.599480 0.640647 0.635425
3.0 0.491735 0.352691 0.458837
4.0 0.422475 0.375968 0.640691
5.0 0.399153 0.323178 0.582169
6.0 0.443223 0.727583 0.341506
7.0 0.345617 0.252422 0.384289
8.0 0.339093 0.220331 0.828446
9.0 0.299885 0.210356 0.736432
10.0 0.358371 0.281947 0.712062
11.0 0.379343 0.315918 0.664169
12.0 0.343266 0.247947 0.719586
13.0 0.358444 0.292065 0.713986
14.0 0.295055 0.162326 0.785514
15.0 0.345515 0.263934 0.749203
16.0 0.311929 0.180310 0.775462
17.0 0.350213 0.284582 0.697111
18.0 0.327515 0.230986 0.788461
19.0 0.340109 0.222722 0.798030
20.0 0.350719 0.264215 0.775162
21.0 0.362950 0.298327 0.778812
22.0 0.379060 0.353989 0.774905
23.0 0.297731 0.185384 0.872158
24.0 0.379329 0.294101 0.720544
25.0 0.367293 0.278624 0.797404
26.0 0.328050 0.222476 0.835244
27.0 0.346733 0.251072 0.813184
28.0 0.335664 0.303719 0.830293
29.0 0.289874 0.195881 0.841988
30.0 0.316018 0.272230 0.794175
31.0 0.317401 0.247245 0.818559
32.0 0.307448 0.244857 0.835706
33.0 0.331701 0.275161 0.774527
34.0 0.310319 0.273442 0.823123
35.0 0.287749 0.200531 0.854322
36.0 0.253023 0.139995 0.868301
37.0 0.304725 0.188430 0.865770
38.0 0.288737 0.179938 0.885981
39.0 0.346665 0.366419 0.780406
40.0 0.318467 0.264754 0.850738
41.0 0.285991 0.152549 0.899979
42.0 0.274300 0.195553 0.825998
43.0 0.321044 0.208990 0.829318
44.0 0.317549 0.207013 0.844447
45.0 0.274411 0.140583 0.878144
46.0 0.322116 0.268414 0.743020
47.0 0.286032 0.148251 0.873385
48.0 0.395226 0.348147 0.811446
49.0 0.374620 0.271707 0.856329
50.0 0.392728 0.336579 0.834961
51.0 0.262487 0.106165 0.943098
52.0 0.453416 0.454718 0.755477

When displaying dictionaries, a “Compare” tab shows a diff view:

nbs_two["model_params"]
[Compare tab: a side-by-side, color-coded diff of the two parameter dictionaries; the only change is 'n_estimators': 50 → 'n_estimators': 100]
{
    'bootstrap': True,
    'ccp_alpha': 0.0,
    'criterion': 'squared_error',
    'max_depth': None,
    'max_features': 1.0,
    'max_leaf_nodes': None,
    'max_samples': None,
    'min_impurity_decrease': 0.0,
    'min_samples_leaf': 1,
    'min_samples_split': 2,
    'min_weight_fraction_leaf': 0.0,
    'n_estimators': 50,
    'n_jobs': None,
    'oob_score': False,
    'random_state': None,
    'verbose': 0,
    'warm_start': False,
}
{
    'bootstrap': True,
    'ccp_alpha': 0.0,
    'criterion': 'squared_error',
    'max_depth': None,
    'max_features': 1.0,
    'max_leaf_nodes': None,
    'max_samples': None,
    'min_impurity_decrease': 0.0,
    'min_samples_leaf': 1,
    'min_samples_split': 2,
    'min_weight_fraction_leaf': 0.0,
    'n_estimators': 100,
    'n_jobs': None,
    'oob_score': False,
    'random_state': None,
    'verbose': 0,
    'warm_start': False,
}

Lists (and sets) are compared based on element membership:

nbs_two["feature_names"]
Both Only in random_forest_1 Only in random_forest_2
AveBedrms
AveOccup
AveRooms
HouseAge
Latitude
Longitude
MedInc
Population
['MedInc', 'HouseAge', 'AveRooms', 'AveBedrms', 'Population', 'AveOccup', 'Latitude', 'Longitude']
['MedInc', 'HouseAge', 'AveRooms', 'AveBedrms', 'Population', 'AveOccup', 'Latitude', 'Longitude']
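The "Both / Only in …" columns can be reproduced with plain set operations; a sketch using illustrative, truncated feature lists (the notebooks above actually share all eight features):

```python
# illustrative feature lists; not the full lists from the notebooks
first = ["MedInc", "HouseAge", "AveRooms"]
second = ["MedInc", "HouseAge", "Population"]

both = sorted(set(first) & set(second))          # in both experiments
only_first = sorted(set(first) - set(second))    # only in the first
only_second = sorted(set(second) - set(first))   # only in the second
print(both, only_first, only_second)
```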

Using the mapping interface#

NotebookCollection has a dict-like interface; you can retrieve data from individual notebooks:

nbs["model_params"]["random_forest_1"]
{'bootstrap': True,
 'ccp_alpha': 0.0,
 'criterion': 'squared_error',
 'max_depth': None,
 'max_features': 1.0,
 'max_leaf_nodes': None,
 'max_samples': None,
 'min_impurity_decrease': 0.0,
 'min_samples_leaf': 1,
 'min_samples_split': 2,
 'min_weight_fraction_leaf': 0.0,
 'n_estimators': 50,
 'n_jobs': None,
 'oob_score': False,
 'random_state': None,
 'verbose': 0,
 'warm_start': False}
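Because each tag maps experiment ids to values, you can also aggregate programmatically across experiments. A sketch using plain dicts shaped like the output above (hypothetical stand-ins, not the NotebookCollection objects themselves):

```python
# hypothetical dicts mirroring the nbs["model_params"] mapping above
model_params = {
    "random_forest_1": {"n_estimators": 50},
    "random_forest_2": {"n_estimators": 100},
}

# collect a single parameter across experiments
n_estimators = {name: params["n_estimators"] for name, params in model_params.items()}
print(n_estimators)
```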
nbs["plot"]["random_forest_2"]
[y_true vs. y_pred plot for random_forest_2]