bmtk.utils.brain_observatory package#
Subpackages#
- bmtk.utils.brain_observatory.behavior package
- bmtk.utils.brain_observatory.ecephys package
- Submodules
- bmtk.utils.brain_observatory.ecephys.ecephys_project_cache module
- EcephysProjectCache: CHANNELS_KEY, MANIFEST_VERSION, NATURAL_MOVIE_DIR_KEY, NATURAL_MOVIE_KEY, NATURAL_SCENE_DIR_KEY, NATURAL_SCENE_KEY, PROBES_KEY, PROBE_LFP_NWB_KEY, SESSIONS_KEY, SESSION_ANALYSIS_METRICS_KEY, SESSION_DIR_KEY, SESSION_NWB_KEY, SUPPRESS_FROM_PROBES, TYPEWISE_ANALYSIS_METRICS_KEY, UNITS_KEY, add_manifest_paths(), from_warehouse(), get_channels(), get_probes(), get_session_data(), get_unit_analysis_metrics_for_session()
- EcephysProjectWarehouseApi: default(), get_channels(), get_probe_lfp_data(), get_probes(), get_rig_metadata(), get_session_data(), get_sessions(), get_unit_analysis_metrics(), get_units()
- EcephysSession
- Module contents
Submodules#
bmtk.utils.brain_observatory.brain_observatory_cache module#
- class bmtk.utils.brain_observatory.brain_observatory_cache.BrainObservatoryApi(base_uri=None, datacube_uri=None)[source]#
Bases: RmaTemplate
- NWB_FILE_TYPE = 'NWBOphys'#
- OPHYS_ANALYSIS_FILE_TYPE = 'OphysExperimentCellRoiMetricsFile'#
- OPHYS_EVENTS_FILE_TYPE = 'ObservatoryEventsFile'#
- rma_templates = {'brain_observatory_queries': [{'count': False, 'criteria_params': [], 'description': 'see name', 'model': 'IsiExperiment', 'name': 'list_isi_experiments', 'num_rows': 'all'}, {'count': False, 'criteria': '[id$in{{ isi_experiment_ids }}]', 'criteria_params': ['isi_experiment_ids'], 'description': 'see name', 'include': 'experiment_container(ophys_experiments,targeted_structure)', 'model': 'IsiExperiment', 'name': 'isi_experiment_by_ids', 'num_rows': 'all'}, {'count': False, 'criteria': '{% if ophys_experiment_ids is defined %}[id$in{{ ophys_experiment_ids }}]{%endif%}', 'criteria_params': ['ophys_experiment_ids'], 'description': 'see name', 'include': 'experiment_container,well_known_files(well_known_file_type),targeted_structure,specimen(donor(age,transgenic_lines))', 'model': 'OphysExperiment', 'name': 'ophys_experiment_by_ids', 'num_rows': 'all'}, {'count': False, 'criteria': '[attachable_id$eq{{ ophys_experiment_id }}],well_known_file_type[name$eqNWBOphys]', 'criteria_params': ['ophys_experiment_id'], 'description': 'see name', 'model': 'WellKnownFile', 'name': 'ophys_experiment_data', 'num_rows': 'all'}, {'count': False, 'criteria': '[attachable_id$eq{{ ophys_experiment_id }}],well_known_file_type[name$eqOphysExperimentCellRoiMetricsFile]', 'criteria_params': ['ophys_experiment_id'], 'description': 'see name', 'model': 'WellKnownFile', 'name': 'ophys_analysis_file', 'num_rows': 'all'}, {'count': False, 'criteria': '[attachable_id$eq{{ ophys_experiment_id }}],well_known_file_type[name$eqObservatoryEventsFile]', 'criteria_params': ['ophys_experiment_id'], 'description': 'see name', 'model': 'WellKnownFile', 'name': 'ophys_events_file', 'num_rows': 'all'}, {'count': False, 'criteria': '[api_class_name$eq{{ api_class_name }}]', 'criteria_params': ['api_class_name'], 'description': 'see name', 'model': 'ApiColumnDefinition', 'name': 'column_definitions', 'num_rows': 'all'}, {'count': False, 'description': 'see name', 'model': 
'ApiColumnDefinition', 'name': 'column_definition_class_names', 'num_rows': 'all', 'only': ['api_class_name']}, {'count': False, 'criteria': '{% if stimulus_mapping_ids is defined %}[id$in{{ stimulus_mapping_ids }}]{%endif%}', 'criteria_params': ['stimulus_mapping_ids'], 'description': 'see name', 'model': 'ApiCamStimulusMapping', 'name': 'stimulus_mapping', 'num_rows': 'all'}, {'count': False, 'criteria': '{% if experiment_container_ids is defined %}[id$in{{ experiment_container_ids }}]{%endif%}', 'criteria_params': ['experiment_container_ids'], 'description': 'see name', 'include': 'ophys_experiments,isi_experiment,specimen(donor(conditions,age,transgenic_lines)),targeted_structure', 'model': 'ExperimentContainer', 'name': 'experiment_container', 'num_rows': 'all'}, {'count': False, 'criteria': '{% if experiment_container_metric_ids is defined %}[id$in{{ experiment_container_metric_ids }}]{%endif%}', 'criteria_params': ['experiment_container_metric_ids'], 'description': 'see name', 'model': 'ApiCamExperimentContainerMetric', 'name': 'experiment_container_metric', 'num_rows': 'all'}, {'criteria': '{% if cell_specimen_ids is defined %}[cell_specimen_id$in{{ cell_specimen_ids }}]{%endif%}', 'criteria_params': ['cell_specimen_ids'], 'description': 'see name', 'model': 'ApiCamCellMetric', 'name': 'cell_metric'}, {'count': False, 'criteria': '[id$eq{{ mapping_table_id }}],well_known_file_type[name$eqOphysCellSpecimenIdMapping]', 'criteria_params': ['mapping_table_id'], 'description': 'see name', 'model': 'WellKnownFile', 'name': 'cell_specimen_id_mapping_table', 'num_rows': 'all'}, {'count': False, 'criteria': '[attachable_id$eq{{ ophys_session_id }}],well_known_file_type[name$eqEyeDlcScreenMapping]', 'criteria_params': ['ophys_session_id'], 'description': 'h5 file containing mouse eye gaze mapped onto screen coordinates (as well as pupil and eye sizes)', 'model': 'WellKnownFile', 'name': 'eye_gaze_mapping_file', 'num_rows': 'all'}, {'count': False, 'criteria': 
'well_known_file_type[name$eqEyeDlcScreenMapping]', 'description': 'Get a list of dictionaries for all eye mapping wkfs', 'model': 'WellKnownFile', 'name': 'all_eye_mapping_files', 'num_rows': 'all'}]}#
- class bmtk.utils.brain_observatory.brain_observatory_cache.BrainObservatoryCache(cache=True, manifest_file=None, base_uri=None, api=None)[source]#
Bases: Cache
- ANALYSIS_DATA_KEY = 'ANALYSIS_DATA'#
- CELL_SPECIMENS_KEY = 'CELL_SPECIMENS'#
- EVENTS_DATA_KEY = 'EVENTS_DATA'#
- EXPERIMENTS_KEY = 'EXPERIMENTS'#
- EXPERIMENT_CONTAINERS_KEY = 'EXPERIMENT_CONTAINERS'#
- EXPERIMENT_DATA_KEY = 'EXPERIMENT_DATA'#
- EYE_GAZE_DATA_KEY = 'EYE_GAZE_DATA'#
- MANIFEST_VERSION = '1.3'#
- STIMULUS_MAPPINGS_KEY = 'STIMULUS_MAPPINGS'#
- build_manifest(file_name)[source]#
Construct a manifest for this Cache class and save it in a file.
- Parameters:
- file_name: string
File location to save the manifest.
- get_ophys_experiment_data(ophys_experiment_id, file_name=None)[source]#
Download the NWB file for an ophys_experiment (if it hasn’t already been downloaded) and return a data accessor object.
- Parameters:
- file_name: string
File name to save/read the data set. If file_name is None, the file_name will be pulled out of the manifest. If caching is disabled, no file will be saved. Default is None.
- ophys_experiment_id: integer
id of the ophys_experiment to retrieve
- Returns:
- BrainObservatoryNwbDataSet
- class bmtk.utils.brain_observatory.brain_observatory_cache.BrainObservatoryNwbDataSet(nwb_file)[source]#
Bases: object
- FILE_METADATA_MAPPING = {'age': 'general/subject/age', 'device_string': 'general/devices/2-photon microscope', 'excitation_lambda': 'general/optophysiology/imaging_plane_1/excitation_lambda', 'experiment_container_id': 'general/experiment_container_id', 'fov': 'general/fov', 'generated_by': 'general/generated_by', 'genotype': 'general/subject/genotype', 'imaging_depth': 'general/optophysiology/imaging_plane_1/imaging depth', 'indicator': 'general/optophysiology/imaging_plane_1/indicator', 'ophys_experiment_id': 'general/session_id', 'session_start_time': 'session_start_time', 'session_type': 'general/session_type', 'sex': 'general/subject/sex', 'specimen_name': 'general/specimen_name', 'targeted_structure': 'general/optophysiology/imaging_plane_1/location'}#
- MOTION_CORRECTION_DATASETS = ['MotionCorrection/2p_image_series/xy_translations', 'MotionCorrection/2p_image_series/xy_translation']#
- PIPELINE_DATASET = 'brain_observatory_pipeline'#
- STIMULUS_TABLE_TYPES = {'abstract_feature_series': ['drifting_gratings', 'static_gratings'], 'indexed_time_series': ['natural_scenes', 'locally_sparse_noise', 'locally_sparse_noise_4deg', 'locally_sparse_noise_8deg'], 'repeated_indexed_time_series': ['natural_movie_one', 'natural_movie_two', 'natural_movie_three']}#
- SUPPORTED_PIPELINE_VERSION = '3.0'#
- get_cell_specimen_ids()[source]#
Returns an array of cell IDs for all cells in the file
- Returns:
- cell specimen IDs: list
- get_cell_specimen_indices(cell_specimen_ids)[source]#
Given a list of cell specimen ids, return their index based on their order in this file.
- Parameters:
- cell_specimen_ids: list of cell specimen ids
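The index lookup described above amounts to mapping each requested id onto its position in the file's id order; a minimal pure-Python sketch of that logic (the function body and error message here are illustrative, not the bmtk implementation):

```python
def get_cell_specimen_indices(file_ids, cell_specimen_ids):
    """Map each requested cell specimen id to its index in the file's id order."""
    index_of = {cid: i for i, cid in enumerate(file_ids)}  # id -> position in file
    try:
        return [index_of[cid] for cid in cell_specimen_ids]
    except KeyError as e:
        raise ValueError("cell specimen id not found in this file: %s" % e)

# a file listing ids in the order 101, 102, 103
print(get_cell_specimen_indices([101, 102, 103], [103, 101]))  # [2, 0]
```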
- get_corrected_fluorescence_traces(cell_specimen_ids=None)[source]#
Returns an array of demixed and neuropil-corrected fluorescence traces for all ROIs and the timestamps for each datapoint
- Parameters:
- cell_specimen_ids: list or array (optional)
List of cell IDs to return traces for. If this is None (default) then all are returned
- Returns:
- timestamps: 2D numpy array
Timestamp for each fluorescence sample
- traces: 2D numpy array
Corrected fluorescence traces for each cell
- get_demixed_traces(cell_specimen_ids=None)[source]#
Returns an array of demixed fluorescence traces for all ROIs and the timestamps for each datapoint
- Parameters:
- cell_specimen_ids: list or array (optional)
List of cell IDs to return traces for. If this is None (default) then all are returned
- Returns:
- timestamps: 2D numpy array
Timestamp for each fluorescence sample
- traces: 2D numpy array
Demixed fluorescence traces for each cell
- get_dff_traces(cell_specimen_ids=None)[source]#
Returns an array of dF/F traces for all ROIs and the timestamps for each datapoint
- Parameters:
- cell_specimen_ids: list or array (optional)
List of cell IDs to return data for. If this is None (default) then all are returned
- Returns:
- timestamps: 2D numpy array
Timestamp for each fluorescence sample
- dF/F: 2D numpy array
dF/F values for each cell
- get_fluorescence_timestamps()[source]#
Returns an array of timestamps in seconds for the fluorescence traces
- get_fluorescence_traces(cell_specimen_ids=None)[source]#
Returns an array of fluorescence traces for all ROIs and the timestamps for each datapoint
- Parameters:
- cell_specimen_ids: list or array (optional)
List of cell IDs to return traces for. If this is None (default) then all are returned
- Returns:
- timestamps: 2D numpy array
Timestamp for each fluorescence sample
- traces: 2D numpy array
Fluorescence traces for each cell
- get_locally_sparse_noise_stimulus_template(stimulus, mask_off_screen=True)[source]#
Return an array of the stimulus template for the specified stimulus.
- Parameters:
- stimulus: string
- Which locally sparse noise stimulus to retrieve. Must be one of: stimulus_info.LOCALLY_SPARSE_NOISE, stimulus_info.LOCALLY_SPARSE_NOISE_4DEG, stimulus_info.LOCALLY_SPARSE_NOISE_8DEG
- mask_off_screen: boolean
Set off-screen regions of the stimulus to LocallySparseNoise.LSN_OFF_SCREEN.
- Returns:
- tuple: (template, off-screen mask)
- get_max_projection()[source]#
Returns the maximum projection image for the 2P movie.
- Returns:
- max projection: np.ndarray
- get_metadata()[source]#
Returns a dictionary of metadata associated with the experiment, including Cre line, specimen number, visual area imaged, and imaging depth
- Returns:
- metadata: dictionary
- get_motion_correction()[source]#
Returns a pandas DataFrame containing the x- and y-translation of each image used for image alignment
- get_neuropil_r(cell_specimen_ids=None)[source]#
Returns the r values used for neuropil correction of fluorescence traces
- Parameters:
- cell_specimen_ids: list or array (optional)
List of cell IDs to return traces for. If this is None (default) then results for all are returned
- Returns:
- r: 1D numpy array, len(r)=len(cell_specimen_ids)
Scalar for neuropil subtraction for each cell
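These r values feed the usual per-cell neuropil subtraction, corrected[i] = demixed[i] - r[i] * neuropil[i], which is how get_corrected_fluorescence_traces relates to get_demixed_traces and get_neuropil_traces; a minimal sketch of that arithmetic (illustrative, not the bmtk code):

```python
def neuropil_correct(demixed, neuropil, r):
    """Per-cell neuropil subtraction: corrected[i][t] = demixed[i][t] - r[i] * neuropil[i][t]."""
    return [
        [d - ri * n for d, n in zip(d_trace, n_trace)]
        for d_trace, n_trace, ri in zip(demixed, neuropil, r)
    ]

demixed = [[10.0, 12.0], [8.0, 9.0]]   # two cells, two samples each
neuropil = [[2.0, 4.0], [1.0, 1.0]]
r = [0.5, 0.7]
print(neuropil_correct(demixed, neuropil, r))  # [[9.0, 10.0], [7.3, 8.3]]
```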
- get_neuropil_traces(cell_specimen_ids=None)[source]#
Returns an array of neuropil fluorescence traces for all ROIs and the timestamps for each datapoint
- Parameters:
- cell_specimen_ids: list or array (optional)
List of cell IDs to return traces for. If this is None (default) then all are returned
- Returns:
- timestamps: 2D numpy array
Timestamp for each fluorescence sample
- traces: 2D numpy array
Neuropil fluorescence traces for each cell
- get_pupil_location(as_spherical=True)[source]#
Returns the x, y pupil location.
- Parameters:
- as_spherical: bool
Whether to return the location as spherical (default) or not. If True, the result is altitude and azimuth in degrees; otherwise it is x, y in centimeters. (0,0) is the center of the monitor.
- Returns:
- (timestamps, location)
Timestamps is an (Nx1) array of timestamps in seconds. Location is an (Nx2) array of spatial location.
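The spherical convention can be pictured as projecting the on-monitor (x, y) position through the eye; a sketch of one such cm-to-degrees conversion, assuming a fixed eye-to-monitor distance (the 15.0 cm default mirrors warp_stimulus_coords below and is an assumption here, not necessarily the dataset's exact mapping):

```python
import math

def xy_cm_to_spherical(x_cm, y_cm, distance_cm=15.0):
    """Convert an on-monitor (x, y) position in cm, with (0, 0) at the monitor
    center, into (azimuth, altitude) in degrees for an eye distance_cm away."""
    azimuth = math.degrees(math.atan2(x_cm, distance_cm))
    altitude = math.degrees(math.atan2(y_cm, distance_cm))
    return azimuth, altitude

az, alt = xy_cm_to_spherical(15.0, 0.0)
print(round(az, 1), round(alt, 1))  # 45.0 0.0
```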
- get_pupil_size()[source]#
Returns the pupil area in pixels.
- Returns:
- (timestamps, areas)
Timestamps is an (Nx1) array of timestamps in seconds. Areas is an (Nx1) array of pupil areas in pixels.
- get_roi_mask(cell_specimen_ids=None)[source]#
Returns an array of all the ROI masks
- Parameters:
- cell_specimen_ids: list or array (optional)
List of cell IDs to return traces for. If this is None (default) then all are returned
- Returns:
- List of ROI_Mask objects
- get_roi_mask_array(cell_specimen_ids=None)[source]#
Return a numpy array containing all of the ROI masks for requested cells. If cell_specimen_ids is omitted, return all masks.
- Parameters:
- cell_specimen_ids: list
List of cell specimen ids. Default None.
- Returns:
- np.ndarray: NxWxH array, where N is number of cells
- get_session_type()[source]#
Returns the type of experimental session, presently one of the following: three_session_A, three_session_B, three_session_C
- Returns:
- session type: string
- get_stimulus_epoch_table()[source]#
Returns a pandas dataframe that summarizes the stimulus epoch duration for each acquisition time index in the experiment
- Parameters:
- None
- Returns:
- stimulus epoch table: pd.DataFrame
Table summarizing the duration of each stimulus epoch in the experiment.
- get_stimulus_table(stimulus_name)[source]#
Return a stimulus table given a stimulus name
Notes
For more information, see: http://help.brain-map.org/display/observatory/Documentation?preview=/10616846/10813485/VisualCoding_VisualStimuli.pdf
- get_stimulus_template(stimulus_name)[source]#
Return an array of the stimulus template for the specified stimulus.
- Parameters:
- stimulus_name: string
Must be one of the strings returned by list_stimuli().
- Returns:
- stimulus template: np.ndarray
- list_stimuli()[source]#
Return a list of the stimuli presented in the experiment.
- Returns:
- stimuli: list of strings
- property number_of_cells#
Number of cells in the experiment
- property stimulus_search#
bmtk.utils.brain_observatory.cache module#
bmtk.utils.brain_observatory.manifest module#
- class bmtk.utils.brain_observatory.manifest.Manifest(config=None, relative_base_dir='.', version=None)[source]#
Bases: object
- DIR = 'dir'#
- DIRNAME = 'dir_name'#
- FILE = 'file'#
- VERSION = 'manifest_version'#
- add_file(file_key, file_name, dir_key=None, path_format=None)[source]#
Insert a new file entry.
- Parameters:
- file_key: string
Reference to the entry.
- file_name: string
Substitutions of the %s, %d style allowed.
- dir_key: string
Reference to the parent directory entry.
- path_format: string, optional
File type for further parsing.
- add_path(key, path, path_type='dir', absolute=True, path_format=None, parent_key=None)[source]#
Insert a new entry.
- Parameters:
- key: string
Identifier for referencing the entry.
- path: string
Specification for a path using %s, %d style substitution.
- path_type: string enumeration
‘dir’ (default) or ‘file’
- absolute: boolean
Whether the spec is relative to the process's current directory.
- path_format: string, optional
Indicate a known file type for further parsing.
- parent_key: string
Refer to another entry.
- get_path(path_key, *args)[source]#
Retrieve an entry with substitutions.
- Parameters:
- path_key: string
Refer to the entry to retrieve.
- args: any types, optional
arguments to be substituted into the path spec for %s, %d, etc.
- Returns:
- string
Path with parent structure and substitutions applied.
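Entry resolution combines the parent-directory chain with %s/%d substitution; a minimal sketch, under the assumption that entries are stored as a dict of spec/parent pairs (the entries structure and key names here are illustrative, not Manifest's internals):

```python
import os

def get_path(entries, path_key, *args):
    """Resolve a manifest entry: walk up the parent chain to build the full
    path, then apply %s/%d-style substitutions from args."""
    entry = entries[path_key]
    spec = entry["spec"]
    parent = entry.get("parent")
    if parent is not None:
        spec = os.path.join(get_path(entries, parent), spec)
    return spec % args if args else spec

entries = {
    "BASEDIR": {"spec": "/data/cache"},
    "EXPERIMENT_DATA": {"spec": "ophys_experiment_%d.nwb", "parent": "BASEDIR"},
}
print(get_path(entries, "EXPERIMENT_DATA", 12345))
```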
- load_config(config, version=None)[source]#
Load paths into the manifest from an Allen SDK config section.
- Parameters:
- config: Config
Manifest section of an Allen SDK config.
bmtk.utils.brain_observatory.rma_engine module#
- class bmtk.utils.brain_observatory.rma_engine.HttpEngine(scheme: str, host: str, timeout: float = 1200, chunksize: int = 10240, **kwargs)[source]#
Bases:
object
- class bmtk.utils.brain_observatory.rma_engine.RmaEngine(scheme, host, rma_prefix: str = 'api/v2/data', rma_format: str = 'json', page_size: int = 5000, **kwargs)[source]#
Bases: HttpEngine
- property format_query_string#
bmtk.utils.brain_observatory.rma_template module#
- class bmtk.utils.brain_observatory.rma_template.Api(api_base_url_string=None)[source]#
Bases: object
- cleanup_truncated_file(file_path)[source]#
Helper for removing files.
- Parameters:
- file_path: string
Absolute path including the file name to remove.
- construct_well_known_file_download_url(well_known_file_id)[source]#
Join data api endpoint and id.
- Parameters:
- well_known_file_id: integer or string representing an integer
well known file id
- Returns:
- string
the well-known-file download url for the current API server
See also
retrieve_file_over_http: Can be used to retrieve the file from the url.
- default_api_url = 'http://api.brain-map.org'#
- do_query(url_builder_fn, json_traversal_fn, *args, **kwargs)[source]#
Bundle a query url construction function with a corresponding response json traversal function.
- Parameters:
- url_builder_fn: function
A function that takes parameters and returns an rma url.
- json_traversal_fn: function
A function that takes a json-parsed python data structure and returns data from it.
- post: boolean, optional kwarg
True does an HTTP POST, False (default) does a GET
- args: arguments
Arguments to be passed to the url builder function.
- kwargs: keyword arguments
Keyword arguments to be passed to the rma builder function.
- Returns:
- any type
The data extracted from the json response.
- do_rma_query(rma_builder_fn, json_traversal_fn, *args, **kwargs)[source]#
Bundle an RMA query url construction function with a corresponding response json traversal function.
Note: Deprecated in AllenSDK 0.9.2. do_rma_query will be removed in AllenSDK 1.0; it is replaced by do_query because the latter is more general.
- Parameters:
- rma_builder_fn: function
A function that takes parameters and returns an rma url.
- json_traversal_fn: function
A function that takes a json-parsed python data structure and returns data from it.
- args: arguments
Arguments to be passed to the rma builder function.
- kwargs: keyword arguments
Keyword arguments to be passed to the rma builder function.
- Returns:
- any type
The data extracted from the json response.
- download_url = 'http://download.alleninstitute.org'#
- json_msg_query(url, dataframe=False)[source]#
Common case where the url is fully constructed and the response data is stored in the ‘msg’ field.
- Parameters:
- url: string
Where to get the data in json form
- dataframe: boolean
True converts to a pandas dataframe, False (default) doesn’t
- Returns:
- dict or DataFrame
returned data; type depends on dataframe option
- load_api_schema()[source]#
Download the RMA schema from the current RMA endpoint
- Returns:
- dict
the parsed json schema message
Notes
This information and other Allen Brain Atlas Data Portal Data Model documentation is also available as a Class Hierarchy and Class List.
- read_data(parsed_json)[source]#
Return the message data from the parsed query.
- Parameters:
- parsed_json: dict
A python structure corresponding to the JSON data returned from the API.
Notes
See API Response Formats - Response Envelope for additional documentation.
- retrieve_file_over_http(url, file_path, zipped=False)[source]#
Get a file from the data api and save it.
- Parameters:
- url: string
Url [1] from which to get the file.
- file_path: string
Absolute path including the file name to save.
- zipped: bool, optional
If true, assume that the response is a zipped directory and attempt to extract contained files into the directory containing file_path. Default is False.
See also
construct_well_known_file_download_url: Can be used to construct the url.
References
[1] Allen Brain Atlas Data Portal: Downloading a WellKnownFile.
- retrieve_parsed_json_over_http(url, post=False)[source]#
Get the document and put it in a Python data structure
- Parameters:
- url: string
Full API query url.
- post: boolean
True does an HTTP POST, False (default) encodes the URL and does a GET
- Returns:
- dict
Result document as parsed by the JSON library.
- retrieve_xml_over_http(url)[source]#
Get the document and put it in a Python data structure
- Parameters:
- url: string
Full API query url.
- Returns:
- string
Unparsed xml string.
- class bmtk.utils.brain_observatory.rma_template.RmaApi(base_uri=None)[source]#
Bases: Api
See: RESTful Model Access (RMA)
- ALL = 'all'#
- COUNT = 'count'#
- CRITERIA = 'rma::criteria'#
- DEBUG = 'debug'#
- EQ = '$eq'#
- EXCEPT = 'except'#
- EXCPT = 'excpt'#
- FALSE = 'false'#
- INCLUDE = 'rma::include'#
- IS = '$is'#
- MODEL = 'model::'#
- NUM_ROWS = 'num_rows'#
- ONLY = 'only'#
- OPTIONS = 'rma::options'#
- ORDER = 'order'#
- PIPE = 'pipe::'#
- PREVIEW = 'preview'#
- SERVICE = 'service::'#
- START_ROW = 'start_row'#
- TABULAR = 'tabular'#
- TRUE = 'true'#
- build_query_url(stage_clauses, fmt='json')[source]#
Combine one or more RMA query stages into a single RMA query.
- Parameters:
- stage_clauses: list of strings
subqueries
- fmt: string, optional
json (default), xml, or csv
- Returns:
- string
complete RMA url
- build_schema_query(clazz=None, fmt='json')[source]#
Build the URL that will fetch the data schema.
- Parameters:
- clazz: string, optional
Name of a specific class or None (default).
- fmt: string, optional
json (default) or xml
- Returns:
- url: string
The constructed URL
Notes
If a class is specified, only the schema information for that class will be requested, otherwise the url requests the entire schema.
- debug_clause(debug_value=None)[source]#
Construct a debug clause for use in an rma::options clause.
- Parameters:
- debug_value: string or boolean
True, False, None (default) or ‘preview’
- Returns:
- clause: string
The query clause for inclusion in an RMA query URL.
Notes
True will request debugging information in the response. False will request no debugging information. None will return an empty clause. ‘preview’ will request debugging information without the query being run.
- filter(key, value)[source]#
Serialize a single RMA query filter clause.
- Parameters:
- key: string
key for narrowing a query.
- value: string
value for narrowing a query.
- Returns:
- string
a single filter clause for an RMA query string.
- filters(filters)[source]#
Serialize RMA query filter clauses.
- Parameters:
- filters: dict
keys and values for narrowing a query.
- Returns:
- string
filter clause for an RMA query string.
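The bracketed [key$eqvalue] / [key$in...] syntax visible in the criteria strings earlier in this page is what these filter clauses serialize to; a minimal sketch of that serialization (illustrative; the real RmaApi may differ in details):

```python
def filter_clause(key, value):
    """Serialize one filter as [key$eqvalue], or [key$inv1,v2,...] for lists."""
    if isinstance(value, (list, tuple)):
        return "[%s$in%s]" % (key, ",".join(str(v) for v in value))
    return "[%s$eq%s]" % (key, value)

def filters_clause(filters):
    """Join several filters into a single criteria fragment."""
    return ",".join(filter_clause(k, v) for k, v in filters.items())

print(filters_clause({"id": [1, 2, 3], "name": "foo"}))  # [id$in1,2,3],[name$eqfoo]
```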
- model_query(*args, **kwargs)[source]#
Construct and execute a model stage of an RMA query string.
- Parameters:
- model: string
The top level data type
- filters: dict
key, value comparisons applied to the top-level model to narrow the results.
- criteria: string
raw RMA criteria clause to choose what objects are returned
- include: string
raw RMA include clause to return associated objects
- only: list of strings, optional
to be joined into an rma::options only filter to limit what data is returned
- except: list of strings, optional
to be joined into an rma::options except filter to limit what data is returned
- excpt: list of strings, optional
synonym for the except parameter to avoid a reserved word conflict.
- tabular: list of strings, optional
return columns as a tabular data structure rather than a nested tree.
- count: boolean, optional
False to skip the extra database count query.
- debug: string, optional
‘true’, ‘false’ or ‘preview’
- num_rows: int or string, optional
how many database rows are returned (may not correspond directly to JSON tree structure)
- start_row: int or string, optional
which database row is the start of the returned data (may not correspond directly to JSON tree structure)
Notes
See RMA Path Syntax for a brief overview of the normalized RMA syntax. Normalized RMA syntax differs from the legacy syntax used in much of the RMA documentation. Using the &debug=true option with an RMA URL will include debugging information in the response, including the normalized query.
- model_stage(model, **kwargs)[source]#
Construct a model stage of an RMA query string.
- Parameters:
- model: string
The top level data type
- filters: dict
key, value comparisons applied to the top-level model to narrow the results.
- criteria: string
raw RMA criteria clause to choose what objects are returned
- include: string
raw RMA include clause to return associated objects
- only: list of strings, optional
to be joined into an rma::options only filter to limit what data is returned
- except: list of strings, optional
to be joined into an rma::options except filter to limit what data is returned
- tabular: list of strings, optional
return columns as a tabular data structure rather than a nested tree.
- count: boolean, optional
False to skip the extra database count query.
- debug: string, optional
‘true’, ‘false’ or ‘preview’
- num_rows: int or string, optional
how many database rows are returned (may not correspond directly to JSON tree structure)
- start_row: int or string, optional
which database row is the start of the returned data (may not correspond directly to JSON tree structure)
Notes
See RMA Path Syntax for a brief overview of the normalized RMA syntax. Normalized RMA syntax differs from the legacy syntax used in much of the RMA documentation. Using the &debug=true option with an RMA URL will include debugging information in the response, including the normalized query.
- only_except_tabular_clause(filter_type, attribute_list)[source]#
Construct a clause to filter which attributes are returned for use in an rma::options clause.
- Parameters:
- filter_type: string
‘only’, ‘except’, or ‘tabular’
- attribute_list: list of strings
for example [‘acronym’, ‘products.name’, ‘structure.id’]
- Returns:
- clause: string
The query clause for inclusion in an RMA query URL.
Notes
The title of tabular columns can be set by adding ‘+as+<title>’ to the attribute. The tabular filter type requests a response that is row-oriented rather than a nested structure. Because of this, the tabular option can mask the lazy query behavior of an rma::include clause. The tabular option does not mask the inner-join behavior of an rma::include clause. The tabular filter is required for .csv format RMA requests.
- options_clause(**kwargs)[source]#
Build an rma::options clause.
- Parameters:
- only: list of strings, optional
- except: list of strings, optional
- tabular: list of strings, optional
- count: boolean, optional
- debug: string, optional
‘true’, ‘false’ or ‘preview’
- num_rows: int or string, optional
- start_row: int or string, optional
- order_clause(order_list=None)[source]#
Construct an order clause for use in an rma::options clause.
- Parameters:
- order_list: list of strings
for example [‘acronym’, ‘products.name+asc’, ‘structure.id+desc’]
- Returns:
- clause: string
The query clause for inclusion in an RMA query URL.
Notes
Optionally adding ‘+asc’ (default) or ‘+desc’ after an attribute will change the sort order.
- pipe_stage(pipe_name, parameters)[source]#
Connect model and service stages via their JSON responses.
- quote_string(the_string)[source]#
Wrap a clause in single quotes.
- Parameters:
- the_string: string
a clause to be included in an rma query that needs to be quoted
- Returns:
- string
input wrapped in single quotes
- service_query(*args, **kwargs)[source]#
Construct and execute a single-stage RMA query to send a request to a connected service.
- Parameters:
- service_name: string
Name of a documented connected service.
- parameters: dict
key-value pairs as in the online documentation.
- service_stage(service_name, parameters=None)[source]#
Construct an RMA query fragment to send a request to a connected service.
- Parameters:
- service_name: string
Name of a documented connected service.
- parameters: dict
key-value pairs as in the online documentation.
- tuple_filters(filters)[source]#
Construct an RMA filter clause.
Notes
See RMA Path Syntax - Square Brackets for Filters for additional documentation.
- class bmtk.utils.brain_observatory.rma_template.RmaTemplate(base_uri=None, query_manifest=None)[source]#
Bases:
RmaApi
- bmtk.utils.brain_observatory.rma_template.stream_file_over_http(url, file_path, timeout=(9.05, 31.1))[source]#
Supply an http get request and stream the response to a file.
- Parameters:
- url: str
Send the request to this url
- file_path: str
Stream the response to this path
- timeout: float or tuple of float, optional
Specify a timeout for the request. If a tuple, specify separate connect and read timeouts.
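The chunked-streaming pattern behind this helper can be sketched in pure Python. This sketch uses urllib for self-containment (the tuple timeout in the real signature suggests it is built on a richer HTTP client) and demonstrates with a local file:// URL rather than a network request:

```python
import os
import tempfile
import urllib.request

def stream_file_over_http(url, file_path, chunksize=10240):
    """Stream a GET response to disk in fixed-size chunks, so large
    downloads never have to fit in memory."""
    with urllib.request.urlopen(url) as response, open(file_path, "wb") as out:
        while True:
            chunk = response.read(chunksize)
            if not chunk:
                break
            out.write(chunk)

# demonstrate with a local file:// URL so the sketch is self-contained
tmpdir = tempfile.mkdtemp()
src = os.path.join(tmpdir, "src.bin")
dst = os.path.join(tmpdir, "dst.bin")
with open(src, "wb") as f:
    f.write(b"x" * 25000)  # larger than one chunk
stream_file_over_http("file://" + src, dst)
print(os.path.getsize(dst))  # 25000
```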
- bmtk.utils.brain_observatory.rma_template.stream_zip_directory_over_http(url, directory, members=None, timeout=(9.05, 31.1))[source]#
Supply an http get request and stream the response to a file.
- Parameters:
- url: str
Send the request to this url
- directory: str
Extract the response to this directory
- members: list of str, optional
Extract only these files
- timeout: float or tuple of float, optional
Specify a timeout for the request. If a tuple, specify separate connect and read timeouts.
bmtk.utils.brain_observatory.stimulus_info module#
- class bmtk.utils.brain_observatory.stimulus_info.BinaryIntervalSearchTree(search_list)[source]#
Bases:
object
- class bmtk.utils.brain_observatory.stimulus_info.BrainObservatoryMonitor(experiment_geometry=None)[source]#
Bases: Monitor
See: http://help.brain-map.org/display/observatory/Documentation?preview=/10616846/10813485/VisualCoding_VisualStimuli.pdf and https://www.cnet.com/products/asus-pa248q/specs/
- class bmtk.utils.brain_observatory.stimulus_info.ExperimentGeometry(distance, mon_height_cm, mon_width_cm, mon_res, eyepoint)[source]#
Bases: object
- property warp_coordinates#
- class bmtk.utils.brain_observatory.stimulus_info.Monitor(n_pixels_r, n_pixels_c, panel_size, spatial_unit)[source]#
Bases: object
- property aspect_ratio#
- grating_to_screen(phase, spatial_frequency, orientation, distance_from_monitor, p2p_amp=256, baseline=127, translation=(0, 0))[source]#
- property height#
- lsn_image_to_screen(img, stimulus_type, origin='lower', background_color=127, translation=(0, 0))[source]#
- property mask#
- property panel_size#
- property pixel_size#
- property width#
- bmtk.utils.brain_observatory.stimulus_info.all_stimuli()[source]#
Return a list of all stimuli in the data set
- bmtk.utils.brain_observatory.stimulus_info.get_spatial_grating(height=None, aspect_ratio=None, ori=None, pix_per_cycle=None, phase=None, p2p_amp=2, baseline=0)[source]#
- bmtk.utils.brain_observatory.stimulus_info.get_spatio_temporal_grating(t, temporal_frequency=None, **kwargs)[source]#
- bmtk.utils.brain_observatory.stimulus_info.lsn_coordinate_to_monitor_coordinate(lsn_coordinate, monitor_shape, stimulus_type)[source]#
- bmtk.utils.brain_observatory.stimulus_info.make_display_mask(display_shape=(1920, 1200))[source]#
Build a display-shaped mask that indicates which pixels are on screen after warping the stimulus.
- bmtk.utils.brain_observatory.stimulus_info.map_monitor_coordinate_to_stimulus_coordinate(monitor_coordinate, monitor_shape, stimulus_type)[source]#
- bmtk.utils.brain_observatory.stimulus_info.map_monitor_coordinate_to_template_coordinate(monitor_coord, monitor_shape, template_shape)[source]#
- bmtk.utils.brain_observatory.stimulus_info.map_stimulus(source_stimulus_coordinate, source_stimulus_type, target_stimulus_type, monitor_shape)[source]#
- bmtk.utils.brain_observatory.stimulus_info.map_stimulus_coordinate_to_monitor_coordinate(template_coordinate, monitor_shape, stimulus_type)[source]#
- bmtk.utils.brain_observatory.stimulus_info.map_template_coordinate_to_monitor_coordinate(template_coord, monitor_shape, template_shape)[source]#
- bmtk.utils.brain_observatory.stimulus_info.mask_stimulus_template(template_display_coords, template_shape, display_mask=None, threshold=1.0)[source]#
Build a mask for a stimulus template of a given shape and display coordinates that indicates which part of the template is on screen after warping.
- Parameters:
- template_display_coords: list
list of (x,y) display coordinates
- template_shape: tuple
(width,height) of the display template
- display_mask: np.ndarray
boolean 2D mask indicating which display coordinates are on screen after warping.
- threshold: float
Fraction of pixels associated with a template display coordinate that should remain on screen to count as belonging to the mask.
- Returns:
- tuple: (template mask, pixel fraction)
- bmtk.utils.brain_observatory.stimulus_info.monitor_coordinate_to_lsn_coordinate(monitor_coordinate, monitor_shape, stimulus_type)[source]#
- bmtk.utils.brain_observatory.stimulus_info.monitor_coordinate_to_natural_movie_coordinate(monitor_coordinate, monitor_shape)[source]#
- bmtk.utils.brain_observatory.stimulus_info.natural_movie_coordinate_to_monitor_coordinate(natural_movie_coordinate, monitor_shape)[source]#
- bmtk.utils.brain_observatory.stimulus_info.natural_scene_coordinate_to_monitor_coordinate(natural_scene_coordinate, monitor_shape)[source]#
- bmtk.utils.brain_observatory.stimulus_info.sessions_with_stimulus(stimulus)[source]#
Return the names of the sessions that contain a given stimulus.
- bmtk.utils.brain_observatory.stimulus_info.stimuli_in_session(session, allow_unknown=True)[source]#
Return a list of the stimuli available in a given session.
- Parameters:
- session: string
- Must be one of: stimulus_info.THREE_SESSION_A, stimulus_info.THREE_SESSION_B, stimulus_info.THREE_SESSION_C, stimulus_info.THREE_SESSION_C2
- bmtk.utils.brain_observatory.stimulus_info.translate_image_and_fill(img, translation=(0, 0))[source]#
- bmtk.utils.brain_observatory.stimulus_info.warp_stimulus_coords(vertices, distance=15.0, mon_height_cm=32.5, mon_width_cm=51.0, mon_res=(1920, 1200), eyepoint=(0.5, 0.5))[source]#
For a list of screen vertices, provides a corresponding list of texture coordinates.
- Parameters:
- vertices: numpy.ndarray
[[x0,y0], [x1,y1], …] A set of vertices to convert to texture positions.
- distance: float
distance from the monitor in cm.
- mon_height_cm: float
monitor height in cm
- mon_width_cm: float
monitor width in cm
- mon_res: tuple
monitor resolution (x,y)
- eyepoint: tuple
- Returns:
- np.ndarray
x,y coordinates shaped like the input that describe what pixel coordinates are displayed at the input coordinates after warping the stimulus.
bmtk.utils.brain_observatory.utils module#
- bmtk.utils.brain_observatory.utils.file_hash_from_path(file_path)[source]#
Return the hexadecimal file hash for a file
- Parameters:
- file_path: Union[str, Path]
path to a file
- Returns:
- str:
The file hash (Blake2b; hexadecimal) of the file
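A minimal sketch of computing such a hash with hashlib's Blake2b, reading the file in chunks so large files need not fit in memory (the chunk size here is an arbitrary choice, not bmtk's):

```python
import hashlib
import os
import tempfile

def file_hash_from_path(file_path, chunksize=1 << 16):
    """Return the hexadecimal Blake2b hash of a file, read in chunks."""
    hasher = hashlib.blake2b()
    with open(file_path, "rb") as f:
        for chunk in iter(lambda: f.read(chunksize), b""):
            hasher.update(chunk)
    return hasher.hexdigest()

tmp = os.path.join(tempfile.mkdtemp(), "example.txt")
with open(tmp, "wb") as f:
    f.write(b"hello")
print(file_hash_from_path(tmp) == hashlib.blake2b(b"hello").hexdigest())  # True
```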