Showing 14 of 71 files from the diff.

Other files ignored by Codecov:
- doc/Makefile has changed.
- arviz/__init__.py has changed.
- .gitignore has changed.
- CONTRIBUTING.md has changed.
- doc/api.rst was deleted.
- CHANGELOG.md has changed.

@@ -296,7 +296,7 @@
     """Convert Pyro data into an InferenceData object.

     For a usage example read the
-    :doc:`Cookbook section on from_pyro </notebooks/InferenceDataCookbook>`
+    :ref:`Cookbook section on from_pyro <cookbook>`

     Parameters
     ----------

@@ -153,12 +153,19 @@
         >>> az.plot_ppc(data, kind='cumulative')

     Use the coords and flatten parameters to plot selected variable dimensions
-    across multiple plots.
+    across multiple plots. We will now modify the dimension `obs_id` to contain
+    the name of the county where the measure was taken. The change has to
+    be done on both ``posterior_predictive`` and ``observed_data`` groups, which is
+    why we will use :meth:`~arviz.InferenceData.map` to apply the same function to
+    both groups. Afterwards, we will select the counties to be plotted with the
+    ``coords`` arg.

     .. plot::
         :context: close-figs

-        >>> az.plot_ppc(data, coords={'observed_county': ['ANOKA', 'BELTRAMI']}, flatten=[])
+        >>> obs_county = data.posterior["County"][data.constant_data["county_idx"]]
+        >>> data = data.assign_coords(obs_id=obs_county, groups="observed_vars")
+        >>> az.plot_ppc(data, coords={'obs_id': ['ANOKA', 'BELTRAMI']}, flatten=[])

     Plot the overlay using a stacked scatter plot that is particularly useful
     when the sample sizes are small.

@@ -167,7 +174,7 @@
         :context: close-figs

         >>> az.plot_ppc(data, kind='scatter', flatten=[],
-        >>>             coords={'observed_county': ['AITKIN', 'BELTRAMI']})
+        >>>             coords={'obs_id': ['AITKIN', 'BELTRAMI']})

     Plot random posterior predictive sub-samples.
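For context, a minimal runnable sketch of the workflow this new docstring text describes. It assumes the radon example dataset loaded earlier in the plot_ppc examples, with a "County" coordinate on the posterior and a "county_idx" variable in constant_data, as the new lines imply.

import arviz as az

# Assumed input: the radon example dataset used in the plot_ppc docstring.
data = az.load_arviz_data("radon")

# Relabel obs_id with county names on both the posterior_predictive and
# observed_data groups at once via the "observed_vars" groups shorthand.
obs_county = data.posterior["County"][data.constant_data["county_idx"]]
data = data.assign_coords(obs_id=obs_county, groups="observed_vars")

# Plot posterior predictive checks only for the selected counties.
az.plot_ppc(data, coords={"obs_id": ["ANOKA", "BELTRAMI"]}, flatten=[])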

@@ -498,7 +498,7 @@
     All three of them are optional arguments, but at least one of ``trace``,
     ``prior`` and ``posterior_predictive`` must be present.
     For a usage example read the
-    :doc:`Cookbook section on from_pymc3 </notebooks/InferenceDataCookbook>`
+    :ref:`Cookbook section on from_pymc3 <cookbook>`

     Parameters
     ----------

@@ -797,7 +797,7 @@
     """Convert CmdStan data into an InferenceData object.

     For a usage example read the
-    :doc:`Cookbook section on from_cmdstan </notebooks/InferenceDataCookbook>`
+    :ref:`Cookbook section on from_cmdstan <cookbook>`

     Parameters
     ----------

@@ -9,7 +9,7 @@

     See the documentation on  :class:`~arviz.SamplingWrapper` for a more detailed
     description. An example of ``PyStanSamplingWrapper`` usage can be found
-    in the :doc:`pystan_refitting <../notebooks/pystan_refitting>`.
+    in the :ref:`pystan_refitting` notebook.

     Warnings
     --------

@@ -858,7 +858,7 @@
     """Convert PyStan data into an InferenceData object.

     For a usage example read the
-    :doc:`Cookbook section on from_pystan </notebooks/InferenceDataCookbook>`
+    :ref:`Cookbook section on from_pystan <cookbook>`

     Parameters
     ----------

@@ -303,7 +303,7 @@
     """Convert Dictionary data into an InferenceData object.

     For a usage example read the
-    :doc:`Cookbook section on from_dict </notebooks/InferenceDataCookbook>`
+    :ref:`Cookbook section on from_dict <cookbook>`

     Parameters
     ----------

@@ -446,9 +446,9 @@

     .. ipython::

-        In [1]: az.hdi(data, input_core_dims = [["chain","draw", "school"]])
+        In [1]: az.hdi(data, var_names="theta", input_core_dims = [["chain","draw", "school"]])

-    We can also calculate the hdi over a particular selection over all groups:
+    We can also calculate the hdi over a particular selection:

     .. ipython::
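As a reference point, a small runnable sketch of the updated call. It assumes the centered_eight example dataset, whose "theta" variable carries chain, draw and school dimensions; the behaviour contrasted in the comments follows the docstring being edited here.

import arviz as az

# Assumed input: the centered_eight example dataset.
data = az.load_arviz_data("centered_eight")

# Treat chain, draw and school all as core dimensions, so a single
# interval is returned for theta instead of one interval per school.
hdi_pooled = az.hdi(data, var_names="theta", input_core_dims=[["chain", "draw", "school"]])

# Default behaviour reduces only chain and draw, keeping the school dimension.
hdi_per_school = az.hdi(data, var_names="theta")

print(hdi_pooled)
print(hdi_per_school)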

@@ -304,7 +304,7 @@
     """Convert NumPyro data into an InferenceData object.

     For a usage example read the
-    :doc:`Cookbook section on from_numpyro </notebooks/InferenceDataCookbook>`
+    :ref:`Cookbook section on from_numpyro <cookbook>`

     Parameters
     ----------

@@ -487,7 +487,7 @@
     """Convert CmdStanPy data into an InferenceData object.

     For a usage example read the
-    :doc:`Cookbook section on from_cmdstanpy </notebooks/InferenceDataCookbook>`
+    :ref:`Cookbook section on from_cmdstanpy <cookbook>`

     Parameters
     ----------

@@ -151,16 +151,18 @@
     ----------
     obj : dict, str, np.ndarray, xr.Dataset, pystan fit, pymc3 trace
         A supported object to convert to InferenceData:
-            InferenceData: returns unchanged
-            str: Attempts to load the netcdf dataset from disk
-            pystan fit: Automatically extracts data
-            pymc3 trace: Automatically extracts data
-            xarray.Dataset: adds to InferenceData as only group
-            xarray.DataArray: creates an xarray dataset as the only group, gives the
-                         array an arbitrary name, if name not set
-            dict: creates an xarray dataset as the only group
-            numpy array: creates an xarray dataset as the only group, gives the
-                         array an arbitrary name
+
+        - InferenceData: returns unchanged
+        - str: Attempts to load the netcdf dataset from disk
+        - pystan fit: Automatically extracts data
+        - pymc3 trace: Automatically extracts data
+        - xarray.Dataset: adds to InferenceData as only group
+        - xarray.DataArray: creates an xarray dataset as the only group, gives the
+          array an arbitrary name, if name not set
+        - dict: creates an xarray dataset as the only group
+        - numpy array: creates an xarray dataset as the only group, gives the
+          array an arbitrary name
+
     group : str
         If `obj` is a dict or numpy array, assigns the resulting xarray
         dataset to this group.
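A brief sketch of the conversions the reworked list describes; variable and group names are whatever ArviZ assigns by default, and nothing here is specific to this PR.

import numpy as np
import arviz as az

# A numpy array becomes a single-group InferenceData; the variable gets
# an arbitrary name because none was supplied.
idata_array = az.convert_to_inference_data(np.random.randn(4, 100))

# A dict of arrays also becomes a single group, keeping the keys as
# variable names; the `group` argument controls which group that is.
idata_prior = az.convert_to_inference_data(
    {"mu": np.random.randn(4, 100)}, group="prior"
)

# An object that is already InferenceData is returned unchanged.
idata_same = az.convert_to_inference_data(idata_array)
print(idata_array, idata_prior, idata_same, sep="\n")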

@@ -7,6 +7,15 @@

 from ...rcparams import rcParams

+__all__ = [
+    "to_cds",
+    "output_notebook",
+    "output_file",
+    "ColumnDataSource",
+    "create_layout",
+    "show_layout",
+]
+

 def to_cds(
     data,
@@ -19,7 +28,7 @@
 ):
     """Transform data to ColumnDataSource (CDS) compatible with Bokeh.

-    Uses `_ARVIZ_GROUP_` and `_ARVIZ_CDS_SELECTION_`to separate var_name
+    Uses `_ARVIZ_GROUP_` and `_ARVIZ_CDS_SELECTION_` to separate var_name
     from group and dimensions in CDS columns.

     Parameters
@@ -32,10 +41,12 @@
     groups : str or list of str, optional
         Select groups for CDS. Default groups are {"posterior_groups", "prior_groups",
         "posterior_groups_warmup"}
-            - posterior_groups: posterior, posterior_predictive, sample_stats
-            - prior_groups: prior, prior_predictive, sample_stats_prior
-            - posterior_groups_warmup: warmup_posterior, warmup_posterior_predictive,
-                                       warmup_sample_stats
+
+        - posterior_groups: posterior, posterior_predictive, sample_stats
+        - prior_groups: prior, prior_predictive, sample_stats_prior
+        - posterior_groups_warmup: warmup_posterior, warmup_posterior_predictive,
+          warmup_sample_stats
+
     ignore_groups : str or list of str, optional
         Ignore specific groups from CDS.
     dimension : str, or list of str, optional
@@ -45,25 +56,31 @@
     var_name_format : str or tuple of tuple of string, optional
         Select column name format for non-scalar input.
         Predefined options are {"brackets", "underscore", "cds"}
+
             "brackets":
-                - add_group_info == False: theta[0,0]
-                - add_group_info == True: theta_posterior[0,0]
+                - add_group_info == False: ``theta[0,0]``
+                - add_group_info == True: ``theta_posterior[0,0]``
             "underscore":
-                - add_group_info == False: theta_0_0
-                - add_group_info == True: theta_posterior_0_0_
+                - add_group_info == False: ``theta_0_0``
+                - add_group_info == True: ``theta_posterior_0_0_``
             "cds":
-                - add_group_info == False: theta_ARVIZ_CDS_SELECTION_0_0
-                - add_group_info == True: theta_ARVIZ_GROUP_posterior__ARVIZ_CDS_SELECTION_0_0
+                - add_group_info == False: ``theta_ARVIZ_CDS_SELECTION_0_0``
+                - add_group_info == True: ``theta_ARVIZ_GROUP_posterior__ARVIZ_CDS_SELECTION_0_0``
             tuple:
                 Structure:
-                    tuple: (dim_info, group_info)
-                        dim_info: (str: `.join` separator,
-                                   str: dim_separator_start,
-                                   str: dim_separator_end)
-                        group_info: (str: group separator start, str: group separator end)
+
+                    - tuple: (dim_info, group_info)
+
+                        - dim_info: (str: `.join` separator,
+                          str: dim_separator_start,
+                          str: dim_separator_end)
+                        - group_info: (str: group separator start, str: group separator end)
+
                 Example: ((",", "[", "]"), ("_", ""))
-                    - add_group_info == False: theta[0,0]
-                    - add_group_info == True: theta_posterior[0,0]
+
+                    - add_group_info == False: ``theta[0,0]``
+                    - add_group_info == True: ``theta_posterior[0,0]``
+
     index_origin : int, optional
         Start parameter indices from `index_origin`. Either 0 or 1.
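To make the documented tuple format concrete, here is a purely illustrative helper (not part of ArviZ, and hypothetical) that applies the (dim_info, group_info) structure from the example above to produce the column names it lists.

# Illustrative only: mimics the documented var_name_format tuple
# ((",", "[", "]"), ("_", "")), where dim_info = (join separator,
# dim_separator_start, dim_separator_end) and group_info =
# (group separator start, group separator end).
def format_cds_column(var_name, idxs, group=None,
                      var_name_format=((",", "[", "]"), ("_", ""))):
    (join_sep, dim_start, dim_end), (group_start, group_end) = var_name_format
    name = var_name
    if group is not None:  # corresponds to add_group_info == True
        name = f"{name}{group_start}{group}{group_end}"
    return f"{name}{dim_start}{join_sep.join(str(i) for i in idxs)}{dim_end}"

print(format_cds_column("theta", (0, 0)))                     # theta[0,0]
print(format_cds_column("theta", (0, 0), group="posterior"))  # theta_posterior[0,0]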

@@ -53,7 +53,7 @@
     """Container for inference data storage using xarray.

     For a detailed introduction to ``InferenceData`` objects and their usage, see
-    :doc:`/notebooks/XarrayforArviZ`. This page provides help and documentation
+    :ref:`xarray_for_arviz`. This page provides help and documentation
     on ``InferenceData`` methods and their low level implementation.
     """

@@ -314,7 +314,7 @@
     Takes a python dictionary of samples that has been generated by the sample
     method of a model instance and returns an Arviz inference data object.
     For a usage example read the
-    :doc:`Cookbook section on from_pyjags </notebooks/InferenceDataCookbook>`
+    :ref:`Cookbook section on from_pyjags <cookbook>`

     Parameters
     ----------

Files coverage:
- arviz: 91.57%
- Project totals (105 files): 91.57%
Builds #20201015.12: Python 3.7, Python 3.6, Python 3.8, Python 3.8, External special, External latest.
Codecov YAML settings:

ignore:
    - arviz/tests/

codecov:
    notify:
        after_n_builds: 6

comment:
    behavior: default
    branches:
        - "master"

coverage:
  status:
    project:
      default:
        target: auto
        threshold: 2%

    patch:
      default:
        target: 75%