allensdk.brain_observatory package

Submodules

allensdk.brain_observatory.brain_observatory_exceptions module

exception allensdk.brain_observatory.brain_observatory_exceptions.BrainObservatoryAnalysisException[source]

Bases: exceptions.Exception

exception allensdk.brain_observatory.brain_observatory_exceptions.MissingStimulusException[source]

Bases: exceptions.Exception

exception allensdk.brain_observatory.brain_observatory_exceptions.NoEyeTrackingException[source]

Bases: exceptions.Exception

allensdk.brain_observatory.brain_observatory_plotting module

allensdk.brain_observatory.brain_observatory_plotting.plot_drifting_grating_traces(dg, save_dir)[source]

Saves figures with an Ori x TF grid of mean responses.

allensdk.brain_observatory.brain_observatory_plotting.plot_lsn_traces(lsn, save_dir, suffix='')[source]
allensdk.brain_observatory.brain_observatory_plotting.plot_ns_traces(nsa, save_dir)[source]
allensdk.brain_observatory.brain_observatory_plotting.plot_running_a(dg, nm1, nm3, save_dir)[source]
allensdk.brain_observatory.brain_observatory_plotting.plot_sg_traces(sg, save_dir)[source]

allensdk.brain_observatory.circle_plots module

class allensdk.brain_observatory.circle_plots.CoronaPlotter(angle_start=270, plot_scale=1.2, inner_radius=0.3, *args, **kwargs)[source]

Bases: allensdk.brain_observatory.circle_plots.PolarPlotter

infer_dims(category_data)[source]
plot(category_data, data=None, clim=None, cmap=<matplotlib.colors.LinearSegmentedColormap object>)[source]
set_dims(categories)[source]
show_arrow(color=None)[source]
class allensdk.brain_observatory.circle_plots.FanPlotter(group_scale=0.9, *args, **kwargs)[source]

Bases: allensdk.brain_observatory.circle_plots.PolarPlotter

static for_drifting_gratings()[source]
static for_static_gratings()[source]
infer_dims(r_data, angle_data, group_data)[source]
plot(r_data, angle_data, group_data=None, data=None, cmap=<matplotlib.colors.LinearSegmentedColormap object>, clim=None, rmap=None, rlim=None, axis_color=None, label_color=None)[source]
set_dims(rs, angles, groups)[source]
show_angle_labels(angles=None, labels=None, color=None, offset=0.05, fontdict=None)[source]
show_axes(angles=None, radii=None, closed=False, color=None)[source]
show_group_labels(groups=None, color=None, fontdict=None)[source]
show_r_labels(radii=None, labels=None, color=None, offset=0.1, fontdict=None)[source]
class allensdk.brain_observatory.circle_plots.PolarPlotter(direction=-1, angle_start=0, circle_scale=1.1, inner_radius=None, plot_center=(0.0, 0.0), plot_scale=0.9)[source]

Bases: object

DIR_CCW = 1
DIR_CW = -1
finalize()[source]
class allensdk.brain_observatory.circle_plots.TrackPlotter(direction=-1, angle_start=270.0, inner_radius=0.45, ring_length=None, *args, **kwargs)[source]

Bases: allensdk.brain_observatory.circle_plots.PolarPlotter

plot(data, clim=None, cmap=<matplotlib.colors.LinearSegmentedColormap object>, mean_cmap=<matplotlib.colors.LinearSegmentedColormap object>, norm=None)[source]
show_arrow(color=None)[source]
allensdk.brain_observatory.circle_plots.add_angle_labels(ax, angles, labels, radius, color=None, fontdict=None, offset=0.05)[source]
allensdk.brain_observatory.circle_plots.add_arrow(ax, radius, start_angle, end_angle, color=None, width=18.0)[source]
allensdk.brain_observatory.circle_plots.angle_lines(angles, inner_radius, outer_radius)[source]
allensdk.brain_observatory.circle_plots.build_hex_pack(n)[source]
allensdk.brain_observatory.circle_plots.hex_pack(radius, n)[source]
allensdk.brain_observatory.circle_plots.make_pincushion_plot(data, trials, on, nrows, ncols, clim=None, color_map=None, radius=None)[source]
allensdk.brain_observatory.circle_plots.polar_line_circles(radii, theta, start_r=0)[source]
allensdk.brain_observatory.circle_plots.polar_linspace(radius, start_angle, stop_angle, num, endpoint=False, degrees=True)[source]

Evenly distributed list of x,y coordinates from an input range of angles and a radius in polar coordinates.

allensdk.brain_observatory.circle_plots.polar_to_xy(angles, radius)[source]

Convert an array of angles (in radians) and a radius in polar coordinates to an array of x,y coordinates.
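A minimal NumPy sketch of the conversion this docstring describes; the helper name and the (N, 2) return layout are assumptions for illustration, not the actual implementation:

```python
import numpy as np

def polar_to_xy_sketch(angles, radius):
    """Map angles (radians) at a fixed radius to an (N, 2) array of x,y points."""
    return np.column_stack((radius * np.cos(angles), radius * np.sin(angles)))

# Four points on a unit circle at 0, 90, 180, and 270 degrees.
pts = polar_to_xy_sketch(np.deg2rad([0, 90, 180, 270]), 1.0)
```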

allensdk.brain_observatory.circle_plots.radial_arcs(rs, start_theta, end_theta)[source]
allensdk.brain_observatory.circle_plots.radial_circles(rs)[source]
allensdk.brain_observatory.circle_plots.reset_hex_pack()[source]
allensdk.brain_observatory.circle_plots.rings_in_hex_pack(ct)[source]
allensdk.brain_observatory.circle_plots.spiral_trials(radii, x=0.0, y=0.0)[source]
allensdk.brain_observatory.circle_plots.spiral_trials_polar(r, theta, radii, offset=None)[source]
allensdk.brain_observatory.circle_plots.wedge_ring(N, inner_radius, outer_radius, start=0, stop=360)[source]

allensdk.brain_observatory.demixer module

allensdk.brain_observatory.demixer.demix_time_dep_masks(raw_traces, stack, masks)[source]
Parameters:
  • raw_traces – extracted traces
  • stack – movie (same length as traces)
  • masks – binary roi masks
Returns:

demixed traces

allensdk.brain_observatory.demixer.find_negative_baselines(trace)[source]
allensdk.brain_observatory.demixer.find_negative_transients_threshold(trace, window=500, length=10, std_devs=3)[source]
allensdk.brain_observatory.demixer.find_zero_baselines(traces)[source]
allensdk.brain_observatory.demixer.plot_negative_baselines(raw_traces, demix_traces, mask_array, roi_ids_mask, plot_dir, ext='png')[source]
allensdk.brain_observatory.demixer.plot_negative_transients(raw_traces, demix_traces, valid_roi, mask_array, roi_ids_mask, plot_dir, ext='png')[source]
allensdk.brain_observatory.demixer.plot_overlap_masks_lengthOne(roi_ind, masks, savefile=None, weighted=False)[source]
allensdk.brain_observatory.demixer.plot_traces(raw_trace, demix_trace, roi_id, roi_ind, save_file)[source]
allensdk.brain_observatory.demixer.plot_transients(roi_ind, t_trans, masks, traces, demix_traces, savefile)[source]
allensdk.brain_observatory.demixer.rolling_window(trace, window=500)[source]
Parameters:
  • trace
  • window
Returns:

allensdk.brain_observatory.dff module

allensdk.brain_observatory.dff.compute_dff(traces, save_plot_dir=None, mode_kernelsize=5400, mean_kernelsize=3000)[source]

Compute dF/F of a set of traces using a low-pass windowed-mode operator. The operation is basically:

T_mm = windowed_mean(windowed_mode(T))

T_dff = (T - T_mm) / T_mm

Parameters:

traces: np.ndarray

2D array of traces to be analyzed

Returns:

np.ndarray with the same shape as the input array.
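The baseline-and-divide operation above can be illustrated with a simplified stand-in. This sketch uses a running median as the low-pass baseline estimator in place of the windowed mode/mean pair, so it only approximates compute_dff's behavior:

```python
import numpy as np

def dff_sketch(trace, window=101):
    """Crude dF/F: baseline F0 from a running median (a stand-in for the
    windowed mode/mean pair used by compute_dff), then (F - F0) / F0."""
    half = window // 2
    padded = np.pad(trace, half, mode="edge")
    # Running median over the window (O(N*W); fine for a sketch).
    baseline = np.array([np.median(padded[i:i + window])
                         for i in range(trace.size)])
    return (trace - baseline) / baseline

# A flat baseline of 100 with a brief transient to 150: dF/F is ~0.5 at the peak.
trace = np.full(500, 100.0)
trace[250:260] = 150.0
dff = dff_sketch(trace)
```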

allensdk.brain_observatory.dff.main()[source]
allensdk.brain_observatory.dff.movingaverage(x, kernelsize, y)[source]

Compute the windowed average of an array.

Parameters:

x: np.ndarray

Array to be analyzed

kernelsize: int

Size of the moving window

y: np.ndarray

Output array to store the results

allensdk.brain_observatory.dff.movingmode_fast(x, kernelsize, y)[source]

Compute the windowed mode of an array. A running mode is initialized with a histogram of values over the initial kernelsize/2 values. The mode is then updated as the kernel moves by adding and subtracting values from the histogram.

Parameters:

x: np.ndarray

Array to be analyzed

kernelsize: int

Size of the moving window

y: np.ndarray

Output array to store the results
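The running-histogram update described above can be sketched for small non-negative integer data; this is a simplification for illustration, not the optimized movingmode_fast implementation:

```python
import numpy as np

def moving_mode_sketch(x, kernelsize):
    """Windowed mode via an incrementally updated histogram, for small
    non-negative integer data (a simplified take on movingmode_fast)."""
    x = np.asarray(x, dtype=int)
    n = x.size
    half = kernelsize // 2
    # Running histogram initialized over the first kernelsize/2 values.
    hist = np.bincount(x[:min(half, n)], minlength=int(x.max()) + 1)
    y = np.empty(n, dtype=int)
    y[0] = hist.argmax()
    for i in range(1, n):
        if i + half <= n:
            hist[x[i + half - 1]] += 1  # value entering on the right
        if i - half > 0:
            hist[x[i - half - 1]] -= 1  # value leaving on the left
        y[i] = hist.argmax()
    return y

y = moving_mode_sketch([1, 1, 1, 1, 1, 3, 3, 3, 3, 3], kernelsize=5)
```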

allensdk.brain_observatory.dff.plot_onetrace(dff, fc)[source]

Debug plotting function

allensdk.brain_observatory.drifting_gratings module

class allensdk.brain_observatory.drifting_gratings.DriftingGratings(data_set, **kwargs)[source]

Bases: allensdk.brain_observatory.stimulus_analysis.StimulusAnalysis

Perform tuning analysis specific to drifting gratings stimulus.

Parameters:data_set: BrainObservatoryNwbDataSet object
static from_analysis_file(data_set, analysis_file)[source]
get_noise_correlation(corr='spearman')[source]
get_peak()[source]

Computes metrics related to each cell’s peak response condition.

Returns:

Pandas data frame containing the following columns (the _dg suffix is for drifting grating):

  • ori_dg (orientation)
  • tf_dg (temporal frequency)
  • reliability_dg
  • osi_dg (orientation selectivity index)
  • dsi_dg (direction selectivity index)
  • peak_dff_dg (peak dF/F)
  • ptest_dg
  • p_run_dg
  • run_modulation_dg
  • cv_dg (circular variance)
get_representational_similarity(corr='spearman')[source]
get_response()[source]

Computes the mean response for each cell to each stimulus condition. Return is a (# orientations, # temporal frequencies, # cells, 3) np.ndarray. The final dimension contains the mean response to the condition (index 0), standard error of the mean of the response to the condition (index 1), and the number of trials with a significant response (p < 0.05) to that condition (index 2).

Returns:Numpy array storing mean responses.
get_signal_correlation(corr='spearman')[source]
number_ori
number_tf
open_star_plot(cell_specimen_id=None, include_labels=False, cell_index=None)[source]
orivals
plot_direction_selectivity(si_range=[0, 1.5], n_hist_bins=50, color='#ccccdd', p_value_max=0.05, peak_dff_min=3)[source]
plot_orientation_selectivity(si_range=[0, 1.5], n_hist_bins=50, color='#ccccdd', p_value_max=0.05, peak_dff_min=3)[source]
plot_preferred_direction(include_labels=False, si_range=[0, 1.5], color='#ccccdd', p_value_max=0.05, peak_dff_min=3)[source]
plot_preferred_temporal_frequency(si_range=[0, 1.5], color='#ccccdd', p_value_max=0.05, peak_dff_min=3)[source]
populate_stimulus_table()[source]
reshape_response_array()[source]
Returns:response array in cells x stim x repetition for noise correlations
tfvals
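Given the (# orientations, # temporal frequencies, # cells, 3) array documented for get_response above, per-cell tuning can be pulled out by plain indexing. This uses a synthetic array of the documented shape for illustration rather than a real data set:

```python
import numpy as np

# Synthetic stand-in with the documented shape:
# (# orientations, # temporal frequencies, # cells, 3).
n_ori, n_tf, n_cells = 8, 5, 10
response = np.random.rand(n_ori, n_tf, n_cells, 3)

cell = 0
mean_resp = response[:, :, cell, 0]  # mean response per condition
sem_resp = response[:, :, cell, 1]   # standard error of the mean
n_sig = response[:, :, cell, 2]      # trials with significant response

# Index of the peak (preferred) condition for this cell.
ori_idx, tf_idx = np.unravel_index(mean_resp.argmax(), mean_resp.shape)
```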

allensdk.brain_observatory.findlevel module

allensdk.brain_observatory.findlevel.findlevel(inwave, threshold, direction='both')[source]

allensdk.brain_observatory.locally_sparse_noise module

class allensdk.brain_observatory.locally_sparse_noise.LocallySparseNoise(data_set, stimulus=None, **kwargs)[source]

Bases: allensdk.brain_observatory.stimulus_analysis.StimulusAnalysis

Perform tuning analysis specific to the locally sparse noise stimulus.

Parameters:

data_set: BrainObservatoryNwbDataSet object

stimulus: string

Name of locally sparse noise stimulus. See brain_observatory.stimulus_info.

nrows: int

Number of rows in the stimulus template

ncol: int

Number of columns in the stimulus template

LSN
LSN_GREY = 127
LSN_OFF = 0
LSN_OFF_SCREEN = 64
LSN_ON = 255
LSN_mask
cell_index_receptive_field_analysis_data
extralength
static from_analysis_file(data_set, analysis_file, stimulus)[source]
get_mean_response()[source]
get_peak()[source]
get_receptive_field()[source]

Calculates receptive fields for each cell

get_receptive_field_analysis_data()[source]

Calculates receptive fields for each cell

get_receptive_field_attribute_df()[source]
interlength
mean_response
static merge_mean_response(rc1, rc2)[source]

Move out of this class, to session analysis

open_pincushion_plot(on, cell_specimen_id=None, color_map=None, cell_index=None)[source]
plot_cell_receptive_field(on, cell_specimen_id=None, color_map=None, clim=None, mask=None, cell_index=None, scalebar=True)[source]
plot_population_receptive_field(color_map='RdPu', clim=None, mask=None, scalebar=True)[source]
plot_receptive_field_analysis_data(cell_index, **kwargs)[source]
populate_stimulus_table()[source]
static read_cell_index_receptive_field_analysis(file_handle, prefix, path=None)[source]
receptive_field
static save_cell_index_receptive_field_analysis(cell_index_receptive_field_analysis_data, new_nwb, prefix)[source]
sort_trials()[source]
sweeplength

allensdk.brain_observatory.natural_movie module

class allensdk.brain_observatory.natural_movie.NaturalMovie(data_set, movie_name, **kwargs)[source]

Bases: allensdk.brain_observatory.stimulus_analysis.StimulusAnalysis

Perform tuning analysis specific to natural movie stimulus.

Parameters:

data_set: BrainObservatoryNwbDataSet object

movie_name: string

one of [ stimulus_info.NATURAL_MOVIE_ONE, stimulus_info.NATURAL_MOVIE_TWO, stimulus_info.NATURAL_MOVIE_THREE ]

static from_analysis_file(data_set, analysis_file, movie_name)[source]
get_peak()[source]

Computes properties of the peak response condition for each cell.

Returns:

Pandas data frame with the fields below. A suffix of “nm1”, “nm2”, or “nm3” is appended to each field name depending on which of the three movie clips was presented.

  • peak_nm1 (frame with peak response)
  • response_variability_nm1
get_sweep_response()[source]

Returns the dF/F response for each cell

Returns:Numpy array
open_track_plot(cell_specimen_id=None, cell_index=None)[source]
populate_stimulus_table()[source]
sweep_response
sweeplength

allensdk.brain_observatory.natural_scenes module

class allensdk.brain_observatory.natural_scenes.NaturalScenes(data_set, **kwargs)[source]

Bases: allensdk.brain_observatory.stimulus_analysis.StimulusAnalysis

Perform tuning analysis specific to natural scenes stimulus.

Parameters:data_set: BrainObservatoryNwbDataSet object
extralength
static from_analysis_file(data_set, analysis_file)[source]
get_noise_correlation(corr='spearman')[source]
get_peak()[source]

Computes metrics about peak response condition for each cell.

Returns:

Pandas data frame with the following fields (the ‘_ns’ suffix is for natural scenes):

  • scene_ns (scene number)
  • reliability_ns
  • peak_dff_ns (peak dF/F)
  • ptest_ns
  • p_run_ns
  • run_modulation_ns
  • time_to_peak_ns
get_representational_similarity(corr='spearman')[source]
get_response()[source]

Computes the mean response for each cell to each stimulus condition. Return is a (# scenes, # cells, 3) np.ndarray. The final dimension contains the mean response to the condition (index 0), standard error of the mean of the response to the condition (index 1), and the number of trials with a significant (p < 0.05) response to that condition (index 2).

Returns:Numpy array storing mean responses.
get_signal_correlation(corr='spearman')[source]
interlength
number_scenes
open_corona_plot(cell_specimen_id=None, cell_index=None)[source]
plot_time_to_peak(p_value_max=0.05, color_map=<matplotlib.colors.LinearSegmentedColormap object>)[source]
populate_stimulus_table()[source]
reshape_response_array()[source]
Returns:response array in cells x stim x repetition for noise correlations
sweeplength

allensdk.brain_observatory.observatory_plots module

class allensdk.brain_observatory.observatory_plots.DimensionPatchHandler(vals, start_color, end_color, *args, **kwargs)[source]

Bases: object

dim_color(index)[source]
legend_artist(legend, orig_handle, fontsize, handlebox)[source]
allensdk.brain_observatory.observatory_plots.figure_in_px(*args, **kwds)[source]
allensdk.brain_observatory.observatory_plots.finalize_no_axes(pad=0.0)[source]
allensdk.brain_observatory.observatory_plots.finalize_no_labels(pad=0.3, legend=False)[source]
allensdk.brain_observatory.observatory_plots.finalize_with_axes(pad=0.3)[source]
allensdk.brain_observatory.observatory_plots.float_label(n)[source]
allensdk.brain_observatory.observatory_plots.plot_cell_correlation(sig_corrs, labels, colors, scale=15)[source]
allensdk.brain_observatory.observatory_plots.plot_combined_speed(binned_resp_vis, binned_dx_vis, binned_resp_sp, binned_dx_sp, evoked_color, spont_color)[source]
allensdk.brain_observatory.observatory_plots.plot_condition_histogram(vals, bins, color='#ccccdd')[source]
allensdk.brain_observatory.observatory_plots.plot_mask_outline(mask, ax, color='k')[source]
allensdk.brain_observatory.observatory_plots.plot_pupil_location(xy_deg, s=1, c=None, cmap=<matplotlib.colors.LinearSegmentedColormap object>, edgecolor='', include_labels=True)[source]
allensdk.brain_observatory.observatory_plots.plot_radial_histogram(angles, counts, all_angles=None, include_labels=False, offset=180.0, direction=-1, closed=False, color='#ccccdd')[source]
allensdk.brain_observatory.observatory_plots.plot_receptive_field(rf, color_map=None, clim=None, mask=None, outline_color='#cccccc', scalebar=True)[source]
allensdk.brain_observatory.observatory_plots.plot_representational_similarity(rs, dims=None, dim_labels=None, colors=None, dim_order=None, labels=True)[source]
allensdk.brain_observatory.observatory_plots.plot_selectivity_cumulative_histogram(sis, xlabel, si_range=[0, 1.5], n_hist_bins=50, color='#ccccdd')[source]
allensdk.brain_observatory.observatory_plots.plot_speed(binned_resp, binned_dx, num_bins, color)[source]
allensdk.brain_observatory.observatory_plots.plot_time_to_peak(msrs, ttps, t_start, t_end, stim_start, stim_end, cmap)[source]
allensdk.brain_observatory.observatory_plots.population_correlation_scatter(sig_corrs, noise_corrs, labels, colors, scale=15)[source]

allensdk.brain_observatory.r_neuropil module

class allensdk.brain_observatory.r_neuropil.NeuropilSubtract(lam=0.05, dt=1.0, folds=4)[source]

Bases: object

TODO: docs

estimate_error(r)[source]

Estimate error values for a given r for each fold and return the mean.

fit(r_range=[0.0, 2.0], iterations=3, dr=0.1, dr_factor=0.1)[source]

Estimate error values for a range of r values. Identify a new r range around the minimum error values and repeat multiple times. TODO: docs

fit_block_coordinate_desc(r_init=5.0, min_delta_r=1e-08)[source]
set_F(F_M, F_N)[source]

Break the F_M and F_N traces into the number of folds specified in the class constructor and normalize each fold of F_M and F_N relative to F_N.

allensdk.brain_observatory.r_neuropil.ab_from_T(T, lam, dt)[source]
allensdk.brain_observatory.r_neuropil.ab_from_diagonals(mat_dict)[source]

Constructs value for scipy.linalg.solve_banded

Parameters:mat_dict: dictionary of diagonals keyed by offsets
Returns:ab: value for scipy.linalg.solve_banded
allensdk.brain_observatory.r_neuropil.alpha_filter(A=1.0, alpha=0.05, beta=0.25, T=100)[source]
allensdk.brain_observatory.r_neuropil.error_calc(F_M, F_N, F_C, r)[source]
allensdk.brain_observatory.r_neuropil.error_calc_outlier(F_M, F_N, F_C, r)[source]
allensdk.brain_observatory.r_neuropil.estimate_contamination_ratios(F_M, F_N, lam=0.05, folds=4, iterations=3, r_range=[0.0, 2.0], dr=0.1, dr_factor=0.1)[source]

Calculates the neuropil contamination ratio for an ROI.

Parameters:

F_M: ROI trace

F_N: Neuropil trace

Returns:

dictionary: key-value pairs

  • ‘r’: the contamination ratio – corrected trace = M - r*N
  • ‘err’: RMS error
  • ‘min_error’: minimum error
  • ‘bounds_error’: boolean. True if error or R are outside tolerance
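The correction implied by the ‘r’ entry (corrected trace = M - r*N) can be demonstrated on synthetic traces. Here the true ratio is known in advance; in practice it would come from estimate_contamination_ratios:

```python
import numpy as np

# Synthetic data: a true signal contaminated by neuropil at r = 0.7.
rng = np.random.default_rng(0)
true_signal = rng.random(1000)
F_N = rng.random(1000)            # neuropil trace
r_true = 0.7
F_M = true_signal + r_true * F_N  # measured ROI trace

# In practice r would be estimate_contamination_ratios(F_M, F_N)['r'];
# here the known value shows the documented correction step.
F_C = F_M - r_true * F_N
```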
allensdk.brain_observatory.r_neuropil.get_diagonals_from_sparse(mat)[source]

Returns a dictionary of diagonals keyed by offsets

Parameters:mat: scipy.sparse matrix
Returns:dictionary: diagonals keyed by offsets
allensdk.brain_observatory.r_neuropil.normalize_F(F_M, F_N)[source]
allensdk.brain_observatory.r_neuropil.synthesize_F(T, af1, af2, p1=0.05, p2=0.1)[source]

Build a synthetic F_C, F_M, F_N, and r of length T TODO: docs

allensdk.brain_observatory.r_neuropil.validate_with_synthetic_F(T, N)[source]

Compute N synthetic traces of length T with known values of r, then estimate r. TODO: docs

allensdk.brain_observatory.roi_masks module

class allensdk.brain_observatory.roi_masks.Mask(image_w, image_h, label, mask_group)[source]

Bases: object

Abstract class to represent image segmentation mask. Its two main subclasses are RoiMask and NeuropilMask. The former represents the mask of a region of interest (ROI), such as a cell observed in 2-photon imaging. The latter represents the neuropil around that cell, and is useful when subtracting the neuropil signal from the measured ROI signal.

This class should not be instantiated directly.

Parameters:

image_w: integer

Width of image that ROI resides in

image_h: integer

Height of image that ROI resides in

label: text

User-defined text label to identify mask

mask_group: integer

User-defined number to help put masks into different categories

get_mask_plane()[source]

Returns mask content on full-size image plane

Returns:numpy 2D array [img_rows][img_cols]
init_by_pixels(border, pix_list)[source]

Initialize mask using a list of mask pixels

Parameters:

border: float[4]

Coordinates defining useable area of image. See create_roi_mask()

pix_list: integer[][2]

List of pixel coordinates (x,y) that define the mask

class allensdk.brain_observatory.roi_masks.NeuropilMask(w, h, label, mask_group)[source]

Bases: allensdk.brain_observatory.roi_masks.Mask

init_by_mask(border, array)[source]

Initialize mask using spatial mask

Parameters:

border: float[4]

Coordinates defining useable area of image. See create_roi_mask().

array: integer[image height][image width]

Image-sized array that describes the mask. Active parts of the mask should have values >0. Background pixels must be zero

class allensdk.brain_observatory.roi_masks.RoiMask(image_w, image_h, label, mask_group)[source]

Bases: allensdk.brain_observatory.roi_masks.Mask

init_by_mask(border, array)[source]

Initialize mask using spatial mask

Parameters:

border: float[4]

Coordinates defining useable area of image. See create_roi_mask().

array: integer[image height][image width]

Image-sized array that describes the mask. Active parts of the mask should have values >0. Background pixels must be zero

allensdk.brain_observatory.roi_masks.calculate_roi_and_neuropil_traces(movie_h5, roi_mask_list, motion_border)[source]

Calculates ROI and neuropil traces for each mask in roi_mask_list, restricted to the given motion border.

allensdk.brain_observatory.roi_masks.calculate_traces(stack, mask_list, block_size=100)[source]

Calculates the average response of the specified masks in the image stack

Parameters:

stack: float[image height][image width]

Image stack that masks are applied to

mask_list: list<Mask>

List of masks

Returns:

float[number masks][number frames]

This is the average response for each Mask in each image frame
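The per-mask averaging described above can be sketched with NumPy, assuming a (frames, height, width) stack and boolean masks; this is an illustration, not the block-wise implementation:

```python
import numpy as np

def calculate_traces_sketch(stack, masks):
    """Average pixel value under each boolean mask, per frame.
    stack: (T, H, W) array; masks: list of (H, W) boolean arrays.
    Returns a (n_masks, T) array of mean responses."""
    traces = np.empty((len(masks), stack.shape[0]))
    for m, mask in enumerate(masks):
        # Boolean indexing gives (T, n_pixels_in_mask); average over pixels.
        traces[m] = stack[:, mask].mean(axis=1)
    return traces

stack = np.zeros((3, 4, 4))
stack[1, :2, :2] = 8.0               # bright patch in frame 1 only
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True
traces = calculate_traces_sketch(stack, [mask])
```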

allensdk.brain_observatory.roi_masks.create_neuropil_mask(roi, border, combined_binary_mask, label=None)[source]

Convenience function to create and initialize a NeuropilMask. Neuropil masks are defined as the region around an ROI, up to 13 pixels out, that does not include other ROIs.

Parameters:

roi: RoiMask object

The ROI that the neuropil masks will be based on

border: float[4]

Coordinates defining useable area of image. See create_roi_mask().

combined_binary_mask: integer[image_h][image_w]

Image-sized array that shows the position of all ROIs in the image. ROI masks should have a value of one. Background pixels must be zero. In other words, the combined_binary_mask is a bitmap union of all ROI masks.

label: text

User-defined text label to identify the mask

Returns:

NeuropilMask object

allensdk.brain_observatory.roi_masks.create_roi_mask(image_w, image_h, border, pix_list=None, roi_mask=None, label=None, mask_group=-1)[source]

Convenience function to create and initialize an RoiMask.

Parameters:

image_w: integer

Width of image that ROI resides in

image_h: integer

Height of image that ROI resides in

border: float[4]

Coordinates defining useable area of image. If the entire image is usable, and masks are valid anywhere in the image, this should be [(image_w-1), 0, (image_h-1), 0]. The following constants help describe the array order:

RIGHT_SHIFT = 0

LEFT_SHIFT = 1

DOWN_SHIFT = 2

UP_SHIFT = 3

When parts of the image are unusable, for example due to motion-correction shifting of different image frames, the border array should store the usable image area.

pix_list: integer[][2]

List of pixel coordinates (x,y) that define the mask

roi_mask: integer[image_h][image_w]

Image-sized array that describes the mask. Active parts of the mask should have values >0. Background pixels must be zero

label: text

User-defined text label to identify mask

mask_group: integer

User-defined number to help put masks into different categories

Returns:

RoiMask object
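The border layout described above can be assembled with the named constants. The indices follow the constants listed in the docstring, and the fully-usable-image case reproduces the documented [(image_w-1), 0, (image_h-1), 0]:

```python
# Index constants from the docstring above.
RIGHT_SHIFT, LEFT_SHIFT, DOWN_SHIFT, UP_SHIFT = 0, 1, 2, 3

image_w, image_h = 512, 512

# Entire image usable: masks are valid anywhere in the image.
border = [0.0] * 4
border[RIGHT_SHIFT] = image_w - 1
border[LEFT_SHIFT] = 0
border[DOWN_SHIFT] = image_h - 1
border[UP_SHIFT] = 0
```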

allensdk.brain_observatory.roi_masks.create_roi_mask_array(rois)[source]

Create full image mask array from list of RoiMasks.

Parameters:

rois: list<RoiMask>

List of roi masks.

Returns:

np.ndarray: NxWxH array

Boolean array of len(rois) image masks.

allensdk.brain_observatory.session_analysis module

class allensdk.brain_observatory.session_analysis.SessionAnalysis(nwb_path, save_path)[source]

Bases: object

Run all of the stimulus-specific analyses associated with a single experiment session.

Parameters:

nwb_path: string, path to NWB file

save_path: string, path to HDF5 file to store outputs. Recommended NOT to modify the NWB file.

append_experiment_metrics(metrics)[source]

Extract stimulus-agnostic metrics from an experiment into a dictionary

append_metadata(df)[source]

Append the metadata fields from the NWB file as columns to a pd.DataFrame

append_metrics_drifting_grating(metrics, dg)[source]

Extract metrics from the DriftingGratings peak response table into a dictionary.

append_metrics_locally_sparse_noise(metrics, lsn)[source]

Extract metrics from the LocallySparseNoise peak response table into a dictionary.

append_metrics_natural_movie_one(metrics, nma)[source]

Extract metrics from the NaturalMovie(stimulus_info.NATURAL_MOVIE_ONE) peak response table into a dictionary.

append_metrics_natural_movie_three(metrics, nma)[source]

Extract metrics from the NaturalMovie(stimulus_info.NATURAL_MOVIE_THREE) peak response table into a dictionary.

append_metrics_natural_movie_two(metrics, nma)[source]

Extract metrics from the NaturalMovie(stimulus_info.NATURAL_MOVIE_TWO) peak response table into a dictionary.

append_metrics_natural_scene(metrics, ns)[source]

Extract metrics from the NaturalScenes peak response table into a dictionary.

append_metrics_static_grating(metrics, sg)[source]

Extract metrics from the StaticGratings peak response table into a dictionary.

save_session_a(dg, nm1, nm3, peak)[source]

Save the output of session A analysis to self.save_path.

Parameters:

dg: DriftingGratings instance

nm1: NaturalMovie instance

This NaturalMovie instance should have been created with movie_name = stimulus_info.NATURAL_MOVIE_ONE

nm3: NaturalMovie instance

This NaturalMovie instance should have been created with movie_name = stimulus_info.NATURAL_MOVIE_THREE

peak: pd.DataFrame

The combined peak response property table created in self.session_a().

save_session_b(sg, nm1, ns, peak)[source]

Save the output of session B analysis to self.save_path.

Parameters:

sg: StaticGratings instance

nm1: NaturalMovie instance

This NaturalMovie instance should have been created with movie_name = stimulus_info.NATURAL_MOVIE_ONE

ns: NaturalScenes instance

peak: pd.DataFrame

The combined peak response property table created in self.session_b().

save_session_c(lsn, nm1, nm2, peak)[source]

Save the output of session C analysis to self.save_path.

Parameters:

lsn: LocallySparseNoise instance

nm1: NaturalMovie instance

This NaturalMovie instance should have been created with movie_name = stimulus_info.NATURAL_MOVIE_ONE

nm2: NaturalMovie instance

This NaturalMovie instance should have been created with movie_name = stimulus_info.NATURAL_MOVIE_TWO

peak: pd.DataFrame

The combined peak response property table created in self.session_c().

save_session_c2(lsn4, lsn8, nm1, nm2, peak)[source]

Save the output of session C2 analysis to self.save_path.

Parameters:

lsn4: LocallySparseNoise instance

This LocallySparseNoise instance should have been created with self.stimulus = stimulus_info.LOCALLY_SPARSE_NOISE_4DEG.

lsn8: LocallySparseNoise instance

This LocallySparseNoise instance should have been created with self.stimulus = stimulus_info.LOCALLY_SPARSE_NOISE_8DEG.

nm1: NaturalMovie instance

This NaturalMovie instance should have been created with movie_name = stimulus_info.NATURAL_MOVIE_ONE

nm2: NaturalMovie instance

This NaturalMovie instance should have been created with movie_name = stimulus_info.NATURAL_MOVIE_TWO

peak: pd.DataFrame

The combined peak response property table created in self.session_c2().

session_a(plot_flag=False, save_flag=True)[source]

Run stimulus-specific analysis for natural movie one, natural movie three, and drifting gratings. The input NWB needs to be for a stimulus_info.THREE_SESSION_A experiment.

Parameters:

plot_flag: bool

Whether to generate brain_observatory_plotting work plots after running analysis.

save_flag: bool

Whether to save the output of analysis to self.save_path upon completion.

session_b(plot_flag=False, save_flag=True)[source]

Run stimulus-specific analysis for natural scenes, static gratings, and natural movie one. The input NWB needs to be for a stimulus_info.THREE_SESSION_B experiment.

Parameters:

plot_flag: bool

Whether to generate brain_observatory_plotting work plots after running analysis.

save_flag: bool

Whether to save the output of analysis to self.save_path upon completion.

session_c(plot_flag=False, save_flag=True)[source]

Run stimulus-specific analysis for natural movie one, natural movie two, and locally sparse noise. The input NWB needs to be for a stimulus_info.THREE_SESSION_C experiment.

Parameters:

plot_flag: bool

Whether to generate brain_observatory_plotting work plots after running analysis.

save_flag: bool

Whether to save the output of analysis to self.save_path upon completion.

session_c2(plot_flag=False, save_flag=True)[source]

Run stimulus-specific analysis for locally sparse noise (4 deg.), locally sparse noise (8 deg.), natural movie one, and natural movie two. The input NWB needs to be for a stimulus_info.THREE_SESSION_C2 experiment.

Parameters:

plot_flag: bool

Whether to generate brain_observatory_plotting work plots after running analysis.

save_flag: bool

Whether to save the output of analysis to self.save_path upon completion.

verify_roi_lists_equal(roi1, roi2)[source]

TODO: replace this with simpler numpy comparisons

allensdk.brain_observatory.session_analysis.main()[source]
allensdk.brain_observatory.session_analysis.multi_dataframe_merge(dfs)[source]

Merge a number of pd.DataFrames into a single DataFrame on their index columns. If any columns are duplicated, prefer the first occurring instance of the column.
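A pandas sketch of the documented behavior (index-based merge, first occurrence of a duplicated column wins); an illustration, not the actual implementation:

```python
import pandas as pd

def multi_dataframe_merge_sketch(dfs):
    """Merge DataFrames on their indices; when a column name repeats,
    keep the first occurrence."""
    merged = dfs[0]
    for df in dfs[1:]:
        # Skip columns already present so the first occurrence wins.
        new_cols = [c for c in df.columns if c not in merged.columns]
        merged = merged.join(df[new_cols], how="outer")
    return merged

a = pd.DataFrame({"x": [1, 2], "y": [3, 4]}, index=[0, 1])
b = pd.DataFrame({"y": [9, 9], "z": [5, 6]}, index=[0, 1])
merged = multi_dataframe_merge_sketch([a, b])
```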

allensdk.brain_observatory.session_analysis.run_session_analysis(nwb_path, save_path, plot_flag=False, save_flag=True)[source]

Inspect an NWB file to determine which experiment session was run and compute all stimulus-specific analyses.

Parameters:

nwb_path: string

Path to NWB file.

save_path: string

path to save results. Recommended NOT to use NWB file.

plot_flag: bool

Whether to save brain_observatory_plotting work plots.

save_flag: bool

Whether to save results to save_path.

allensdk.brain_observatory.static_gratings module

class allensdk.brain_observatory.static_gratings.StaticGratings(data_set, **kwargs)[source]

Bases: allensdk.brain_observatory.stimulus_analysis.StimulusAnalysis

Perform tuning analysis specific to static gratings stimulus.

Parameters:data_set: BrainObservatoryNwbDataSet object
extralength
static from_analysis_file(data_set, analysis_file)[source]
get_noise_correlation(corr='spearman')[source]
get_peak()[source]

Computes metrics related to each cell’s peak response condition.

Returns:

Pandas data frame with the following fields (the _sg suffix is for static grating):

  • ori_sg (orientation)
  • sf_sg (spatial frequency)
  • phase_sg
  • response_variability_sg
  • osi_sg (orientation selectivity index)
  • peak_dff_sg (peak dF/F)
  • ptest_sg
  • time_to_peak_sg
get_representational_similarity(corr='spearman')[source]
get_response()[source]

Computes the mean response for each cell to each stimulus condition. Return is a (# orientations, # spatial frequencies, # phases, # cells, 3) np.ndarray. The final dimension contains the mean response to the condition (index 0), standard error of the mean of the response to the condition (index 1), and the number of trials with a significant response (p < 0.05) to that condition (index 2).

Returns:Numpy array storing mean responses.
get_signal_correlation(corr='spearman')[source]
interlength
number_ori
number_phase
number_sf
open_fan_plot(cell_specimen_id=None, include_labels=False, cell_index=None)[source]
orivals
phasevals
plot_orientation_selectivity(si_range=[0, 1.5], n_hist_bins=50, color='#ccccdd', p_value_max=0.05, peak_dff_min=3)[source]
plot_preferred_orientation(include_labels=False, si_range=[0, 1.5], color='#ccccdd', p_value_max=0.05, peak_dff_min=3)[source]
plot_preferred_spatial_frequency(si_range=[0, 1.5], color='#ccccdd', p_value_max=0.05, peak_dff_min=3)[source]
plot_time_to_peak(p_value_max=0.05, color_map=<matplotlib.colors.LinearSegmentedColormap object>)[source]
populate_stimulus_table()[source]
reshape_response_array()[source]
Returns: response array organized as cells x stimulus conditions x repetitions, for use in noise correlations. This is a reorganization of the mean sweep response table.

sfvals
sweeplength

allensdk.brain_observatory.stimulus_analysis module

class allensdk.brain_observatory.stimulus_analysis.StimulusAnalysis(data_set)[source]

Bases: object

Base class for all response analysis code. Subclasses are responsible for computing metrics and traces relevant to a particular stimulus. The base class contains methods for organizing sweep responses for each row of the stimulus table (get_sweep_response). Subclasses implement the get_response method, which computes the mean sweep response for each stimulus condition.

Parameters:

data_set: BrainObservatoryNwbDataSet instance

speed_tuning: boolean, deprecated

Whether or not to compute speed tuning histograms

acquisition_rate
binned_cells_sp
binned_cells_vis
binned_dx_sp
binned_dx_vis
cell_id
celltraces
dfftraces
dxcm
dxtime
get_fluorescence()[source]
get_peak()[source]

Implemented by subclasses.

get_response()[source]

Implemented by subclasses.

get_speed_tuning(binsize)[source]

Calculates speed tuning, spontaneous versus visually driven. The return is a 5-tuple of speed and dF/F histograms.

binned_dx_sp: (bins,2) np.ndarray of running speeds binned during spontaneous activity stimulus. The first bin contains all speeds below 1 cm/s. Dimension 0 is mean running speed in the bin. Dimension 1 is the standard error of the mean.

binned_cells_sp: (bins,2) np.ndarray of fluorescence during spontaneous activity stimulus. First bin contains all data for speeds below 1 cm/s. Dimension 0 is mean fluorescence in the bin. Dimension 1 is the standard error of the mean.

binned_dx_vis: (bins,2) np.ndarray of running speeds outside of spontaneous activity stimulus. The first bin contains all speeds below 1 cm/s. Dimension 0 is mean running speed in the bin. Dimension 1 is the standard error of the mean.

binned_cells_vis: (bins,2) np.ndarray of fluorescence outside of the spontaneous activity stimulus. First bin contains all data for speeds below 1 cm/s. Dimension 0 is mean fluorescence in the bin. Dimension 1 is the standard error of the mean.

peak_run: pd.DataFrame of speed-related properties of a cell.

Returns: tuple: binned_dx_sp, binned_cells_sp, binned_dx_vis, binned_cells_vis, peak_run
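
The binning convention (first bin collects everything below 1 cm/s) can be illustrated with a toy sketch on mocked running speeds. This is not the library's implementation; the grouping of faster speeds into fixed-size bins after sorting is an assumption made for illustration:

```python
import numpy as np

def bin_speeds(dx, binsize):
    """Toy illustration of the binning convention: bin 0 collects all speeds
    below 1 cm/s; remaining samples are sorted and grouped into bins of
    `binsize` samples. Each row is (mean, standard error of the mean)."""
    dx = np.asarray(dx, dtype=float)
    slow = dx[dx < 1.0]
    fast = np.sort(dx[dx >= 1.0])
    groups = [slow] + [fast[i:i + binsize] for i in range(0, len(fast), binsize)]
    return np.array([
        [g.mean(), g.std(ddof=1) / np.sqrt(len(g)) if len(g) > 1 else 0.0]
        for g in groups if len(g) > 0
    ])

# Mocked speeds in cm/s; the first two fall into the sub-1 cm/s bin.
binned = bin_speeds([0.2, 0.5, 3.0, 4.0, 5.0, 9.0], binsize=2)
```
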
get_sweep_response()[source]

Calculates the response to each sweep in the stimulus table for each cell and the mean response. The return is a 3-tuple of:

  • sweep_response: pd.DataFrame of response dF/F traces organized by cell (column) and sweep (row)
  • mean_sweep_response: mean values of the traces returned in sweep_response
  • pval: p value from 1-way ANOVA comparing response during sweep to response prior to sweep
Returns: 3-tuple: sweep_response, mean_sweep_response, pval
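
The relationship between the first two elements of the tuple can be sketched with mocked traces (a numpy stand-in for the real pandas DataFrames; all values are invented):

```python
import numpy as np

# Mocked dF/F traces, one per (sweep, cell) pair. In the real
# sweep_response DataFrame, columns are cells and rows are sweeps.
traces = np.array([
    [[0.0, 0.2, 0.4], [0.1, 0.3, 0.5]],   # sweep 0: cell 0, cell 1
    [[0.0, 0.0, 0.3], [0.2, 0.2, 0.2]],   # sweep 1: cell 0, cell 1
])  # shape: (sweeps, cells, samples)

# mean_sweep_response holds the mean of each trace:
mean_sweep_response = traces.mean(axis=2)  # shape: (sweeps, cells)
```
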
mean_sweep_response
numbercells
peak
peak_run
plot_representational_similarity(repsim, stimulus=False)[source]
plot_running_speed_histogram(xlim=None, nbins=None)[source]
plot_speed_tuning(cell_specimen_id=None, cell_index=None, evoked_color='#b30000', spontaneous_color='#0000b3')[source]
populate_stimulus_table()[source]

Implemented by subclasses.

pval
response
roi_id
row_from_cell_id(csid=None, idx=None)[source]
stim_table
sweep_response
timestamps

allensdk.brain_observatory.stimulus_info module

class allensdk.brain_observatory.stimulus_info.BinaryIntervalSearchTree(search_list)[source]

Bases: object

add(input_list, tmp=None)[source]
static from_df(input_df)[source]
search(fi, tmp=None)[source]
class allensdk.brain_observatory.stimulus_info.BrainObservatoryMonitor(experiment_geometry=None)[source]

Bases: allensdk.brain_observatory.stimulus_info.Monitor

http://help.brain-map.org/display/observatory/Documentation?preview=/10616846/10813485/VisualCoding_VisualStimuli.pdf https://www.cnet.com/products/asus-pa248q/specs/

grating_to_screen(phase, spatial_frequency, orientation)[source]
lsn_image_to_screen(img, **kwargs)[source]
pixels_to_visual_degrees(n, **kwargs)[source]
warp_image(img, **kwargs)[source]
class allensdk.brain_observatory.stimulus_info.ExperimentGeometry(distance, mon_height_cm, mon_width_cm, mon_res, eyepoint)[source]

Bases: object

generate_warp_coordinates()[source]
warp_coordinates
class allensdk.brain_observatory.stimulus_info.Monitor(n_pixels_r, n_pixels_c, panel_size, spatial_unit)[source]

Bases: object

aspect_ratio
get_mask()[source]
grating_to_screen(phase, spatial_frequency, orientation, distance_from_monitor, p2p_amp=256, baseline=127)[source]
height
lsn_image_to_screen(img, stimulus_type, origin='lower', background_color=127)[source]
map_stimulus(source_stimulus_coordinate, source_stimulus_type, target_stimulus_type)[source]
natural_movie_image_to_screen(img, origin='lower')[source]
natural_scene_image_to_screen(img, origin='lower')[source]
panel_size
pixel_size
pixels_to_visual_degrees(n, distance_from_monitor, small_angle_approximation=True)[source]
set_spatial_unit(new_unit)[source]
show_image(img, ax=None, show=True, mask=False, warp=False, origin='lower')[source]
spatial_frequency_to_pix_per_cycle(spatial_frequency, distance_from_monitor)[source]
width
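
Under the small-angle approximation, the pixel-to-degree conversion reduces to simple geometry. A sketch assuming square pixels (not the library's exact implementation), using the monitor defaults from warp_stimulus_coords below (51.0 cm wide, 1920 pixels across, viewed from 15 cm):

```python
import math

def pixels_to_degrees(n_pixels, distance_cm=15.0, mon_width_cm=51.0, n_pixels_c=1920):
    """Small-angle approximation: angle (radians) ~= size / distance."""
    pixel_size_cm = mon_width_cm / n_pixels_c
    return math.degrees(n_pixels * pixel_size_cm / distance_cm)

deg = pixels_to_degrees(100)  # visual angle spanned by 100 pixels
```
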
class allensdk.brain_observatory.stimulus_info.StimulusSearch(nwb_dataset)[source]

Bases: object

search(*args, **kwargs)[source]
allensdk.brain_observatory.stimulus_info.all_stimuli()[source]

Return a list of all stimuli in the data set

allensdk.brain_observatory.stimulus_info.get_spatial_grating(height=None, aspect_ratio=None, ori=None, pix_per_cycle=None, phase=None, p2p_amp=2, baseline=0)[source]
allensdk.brain_observatory.stimulus_info.get_spatio_temporal_grating(t, temporal_frequency=None, **kwargs)[source]
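
As an illustration of the grating parameters, here is a hedged reimplementation of the idea behind get_spatial_grating. This is not the library's exact formula; the orientation convention and phase-in-cycles interpretation are assumptions:

```python
import numpy as np

def spatial_grating(height, aspect_ratio, ori_deg, pix_per_cycle, phase,
                    p2p_amp=2.0, baseline=0.0):
    """Sketch of a sinusoidal grating with peak-to-peak amplitude p2p_amp
    around baseline, oriented by ori_deg, with phase given in cycles."""
    width = int(height * aspect_ratio)
    y, x = np.mgrid[0:height, 0:width]
    theta = np.deg2rad(ori_deg)
    # Project pixel coordinates onto the grating's direction of modulation.
    proj = x * np.cos(theta) + y * np.sin(theta)
    return baseline + (p2p_amp / 2.0) * np.sin(2 * np.pi * (proj / pix_per_cycle + phase))

g = spatial_grating(height=120, aspect_ratio=2.0, ori_deg=45.0,
                    pix_per_cycle=30.0, phase=0.0)
```
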
allensdk.brain_observatory.stimulus_info.lsn_coordinate_to_monitor_coordinate(lsn_coordinate, monitor_shape, stimulus_type)[source]
allensdk.brain_observatory.stimulus_info.make_display_mask(display_shape=(1920, 1200))[source]

Build a display-shaped mask that indicates which pixels are on screen after warping the stimulus.

allensdk.brain_observatory.stimulus_info.map_monitor_coordinate_to_stimulus_coordinate(monitor_coordinate, monitor_shape, stimulus_type)[source]
allensdk.brain_observatory.stimulus_info.map_monitor_coordinate_to_template_coordinate(monitor_coord, monitor_shape, template_shape)[source]
allensdk.brain_observatory.stimulus_info.map_stimulus(source_stimulus_coordinate, source_stimulus_type, target_stimulus_type, monitor_shape)[source]
allensdk.brain_observatory.stimulus_info.map_stimulus_coordinate_to_monitor_coordinate(template_coordinate, monitor_shape, stimulus_type)[source]
allensdk.brain_observatory.stimulus_info.map_template_coordinate_to_monitor_coordinate(template_coord, monitor_shape, template_shape)[source]
allensdk.brain_observatory.stimulus_info.mask_stimulus_template(template_display_coords, template_shape, display_mask=None, threshold=1.0)[source]

Build a mask for a stimulus template of a given shape and display coordinates that indicates which part of the template is on screen after warping.

Parameters:

template_display_coords: list

list of (x,y) display coordinates

template_shape: tuple

(width,height) of the display template

display_mask: np.ndarray

boolean 2D mask indicating which display coordinates are on screen after warping.

threshold: float

Fraction of pixels associated with a template display coordinate that should remain on screen to count as belonging to the mask.

Returns:

tuple: (template mask, pixel fraction)
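
The threshold semantics can be illustrated with mocked pixel fractions (the real fractions are computed from the warped template display coordinates):

```python
import numpy as np

# Mocked pixel fractions: for each template coordinate, the fraction of
# its warped display pixels that remain on screen.
pixel_fraction = np.array([[1.0, 0.9],
                           [0.4, 0.0]])

threshold = 1.0  # default: every associated pixel must stay on screen
template_mask = pixel_fraction >= threshold
```
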

allensdk.brain_observatory.stimulus_info.monitor_coordinate_to_lsn_coordinate(monitor_coordinate, monitor_shape, stimulus_type)[source]
allensdk.brain_observatory.stimulus_info.monitor_coordinate_to_natural_movie_coordinate(monitor_coordinate, monitor_shape)[source]
allensdk.brain_observatory.stimulus_info.natural_movie_coordinate_to_monitor_coordinate(natural_movie_coordinate, monitor_shape)[source]
allensdk.brain_observatory.stimulus_info.natural_scene_coordinate_to_monitor_coordinate(natural_scene_coordinate, monitor_shape)[source]
allensdk.brain_observatory.stimulus_info.rotate(X, Y, theta)[source]
allensdk.brain_observatory.stimulus_info.sessions_with_stimulus(stimulus)[source]

Return the names of the sessions that contain a given stimulus.

allensdk.brain_observatory.stimulus_info.stimuli_in_session(session)[source]

Return a list of the stimuli available in a given session.

Parameters:

session: string

Must be one of: [stimulus_info.THREE_SESSION_A, stimulus_info.THREE_SESSION_B, stimulus_info.THREE_SESSION_C, stimulus_info.THREE_SESSION_C2]

allensdk.brain_observatory.stimulus_info.warp_stimulus_coords(vertices, distance=15.0, mon_height_cm=32.5, mon_width_cm=51.0, mon_res=(1920, 1200), eyepoint=(0.5, 0.5))[source]

For a list of screen vertices, provides a corresponding list of texture coordinates.

Parameters:

vertices: numpy.ndarray

[[x0,y0], [x1,y1], ...] A set of vertices to convert to texture positions.

distance: float

distance from the monitor in cm.

mon_height_cm: float

monitor height in cm

mon_width_cm: float

monitor width in cm

mon_res: tuple

monitor resolution (x,y)

eyepoint: tuple

Returns:

np.ndarray

x,y coordinates shaped like the input that describe which pixel coordinates are displayed at the input coordinates after warping the stimulus.

Module contents