All modules for which code is available
- mica.archive.aca_dark.dark_cal
- mica.archive.aca_hdr3
- mica.archive.aca_l0
- mica.archive.asp_l1
- mica.archive.cda.services
- mica.archive.obsid_archive
- mica.archive.obspar
- mica.bad_obsids
- mica.stats.acq_stats
- mica.stats.guide_stats
- mica.utils
- mica.vv.process
- mica.vv.vv
dark_temp_scale()
: get the temperature scaling correction
get_dark_cal_dirs()
: get an ordered dict of dark cal identifier and directory
get_dark_cal_image()
: get a single dark cal image
get_dark_cal_props()
: get properties (e.g. date, temperature) of a dark cal
get_dark_cal_props_table()
: get properties of dark cals over a time range as a table
mica.archive.aca_dark.dark_cal.dark_id_to_date(dark_id)

    Convert ``dark_id`` (YYYYDOY) to the corresponding CxoTime 'date' format.

    Parameters:
        dark_id – dark id (YYYYDOY)
    Returns:
        str in CxoTime 'date' format
mica.archive.aca_dark.dark_cal.date_to_dark_id(date)

    Convert ``date`` to the corresponding YYYYDOY format for a dark cal identifier.

    Parameters:
        date – any CxoTime compatible format
    Returns:
        dark id (YYYYDOY)
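    A round-trip between the two identifier formats, following the conversions
    defined above:

    >>> from mica.archive.aca_dark import dark_cal
    >>> dark_cal.date_to_dark_id('2012:053:12:00:00')
    '2012053'
    >>> dark_cal.dark_id_to_date('2012053')
    '2012:053'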
mica.archive.aca_dark.dark_cal.get_dark_cal_dirs(dark_cals_dir='/proj/sot/ska/data/mica/archive/aca_dark')

    Get an ordered dict of directory paths containing dark current calibration files,
    where the key is the dark cal identifier (YYYYDOY) and the value is the path.

    Parameters:
        dark_cals_dir – directory containing dark cals.
    Returns:
        ordered dict of absolute directory paths
mica.archive.aca_dark.dark_cal.get_dark_cal_id(date, select='before', dark_cal_ids=None)

    Return the dark calibration id corresponding to ``date``.

    If ``select`` is ``'before'`` (default) then use the most recent calibration
    which occurs before ``date``. Other valid options are ``'after'`` and ``'nearest'``.

    Parameters:
        date – date in any CxoTime format
        select – method to select dark cal (before|nearest|after)
        dark_cal_ids – list of all dark-cal IDs (optional, the output of get_dark_cal_ids)
    Returns:
        dark cal id string (YYYYDOY)
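    As a usage sketch (the returned ids shown here are illustrative, not real
    calibration dates):

    >>> from mica.archive.aca_dark import dark_cal
    >>> dark_cal.get_dark_cal_id('2012:053', select='before')
    '2012047'
    >>> dark_cal.get_dark_cal_id('2012:053', select='after')
    '2012075'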
mica.archive.aca_dark.dark_cal.get_dark_cal_ids(dark_cals_dir='/proj/sot/ska/data/mica/archive/aca_dark')

    Get an ordered dict with dates as keys and dark cal identifiers (YYYYDOY) as values.

    Parameters:
        dark_cals_dir – directory containing dark cals.
    Returns:
        ordered dict of dark cal identifiers keyed by date
mica.archive.aca_dark.dark_cal.get_dark_cal_image(date, select='before', t_ccd_ref=None, aca_image=False, allow_negative=False)

    Return the dark calibration image (e-/s) nearest to ``date``.

    If ``select`` is ``'before'`` (default) then use the most recent calibration
    which occurs before ``date``. Other valid options are ``'after'`` and ``'nearest'``.

    Parameters:
        date – date in any CxoTime format
        select – method to select dark cal (before|nearest|after)
        t_ccd_ref – rescale dark map to temperature (degC, default=no scaling)
        aca_image – return an ACAImage instance instead of ndarray
        allow_negative – allow negative values in raw dark map (default=False)
    Returns:
        1024 x 1024 ndarray with dark cal image in e-/s
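    A sketch of fetching a dark cal image rescaled to a reference CCD
    temperature (the date and the -10 degC value are illustrative):

    >>> from mica.archive.aca_dark import dark_cal
    >>> dark = dark_cal.get_dark_cal_image('2012:053', select='nearest', t_ccd_ref=-10.0)
    >>> dark.shape
    (1024, 1024)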
mica.archive.aca_dark.dark_cal.get_dark_cal_props(date, select='before', include_image=False, t_ccd_ref=None, aca_image=False, allow_negative=False)

    Return a dark calibration properties structure for ``date``.

    If ``select`` is ``'before'`` (default) then use the most recent calibration
    which occurs before ``date``. Other valid options are ``'after'`` and ``'nearest'``.

    If ``include_image`` is True then an additional column or key ``image`` is
    defined which contains the corresponding 1024x1024 dark cal image.

    Parameters:
        date – date in any CxoTime format
        select – method to select dark cal (before|nearest|after)
        include_image – include the dark cal images in output (default=False)
        t_ccd_ref – rescale dark map to temperature (degC, default=no scaling)
        aca_image – return an ACAImage instance instead of ndarray
        allow_negative – allow negative values in raw dark map (default=False)
    Returns:
        dict of dark calibration properties
mica.archive.aca_dark.dark_cal.get_dark_cal_props_table(start=None, stop=None, include_image=False, as_table=True)

    Return a table of dark calibration properties between ``start`` and ``stop``.

    If ``include_image`` is True then an additional column or key ``image`` is
    defined which contains the corresponding 1024x1024 dark cal image.

    If ``as_table`` is True (default) then the result is an astropy Table object.
    If False then a list of dicts is returned. In this case the full contents
    of the properties file including replica properties is available.

    Parameters:
        start – start time (default=beginning of mission)
        stop – stop time (default=now)
        include_image – include the dark cal images in output (default=False)
        as_table – return a Table instead of a list (default=True)
    Returns:
        astropy Table or list of dark calibration properties
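    For example, a sketch of pulling the dark cal history into a table (the
    'ccd_temp' column name follows the properties used elsewhere in this
    module; other columns depend on the contents of the properties files):

    >>> from mica.archive.aca_dark import dark_cal
    >>> props = dark_cal.get_dark_cal_props_table('2011:001', '2013:001')
    >>> props['ccd_temp']  # CCD temperature (degC) of each dark cal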
class mica.archive.aca_hdr3.MSID(msid, start, stop, msid_data=None, filter_bad=False)

    ACA header 3 data object to work with header 3 data from
    available 8x8 ACA L0 telemetry:

    >>> from mica.archive import aca_hdr3
    >>> ccd_temp = aca_hdr3.MSID('ccd_temp', '2012:001', '2012:020')
    >>> type(ccd_temp.vals)
    'numpy.ma.core.MaskedArray'

    When given an ``msid`` and a ``start`` and ``stop`` range, the object will
    query the ACA L0 archive to populate the object, which includes the MSID
    values (``vals``) at the given times (``times``).

    The parameter ``msid_data`` is used to create an MSID object from
    the data of another MSID object.

    When ``filter_bad`` is supplied then only valid data values are stored
    and the ``vals`` and ``times`` attributes are np.ndarray instead of
    ma.MaskedArray.

    Parameters:
        msid – MSID name
        start – Chandra.Time compatible start time
        stop – Chandra.Time compatible stop time
        msid_data – data dictionary or object from another MSID object
        filter_bad – remove missing values
class mica.archive.aca_hdr3.MSIDset(msids, start, stop)

    ACA header 3 data object to work with header 3 data from
    available 8x8 ACA L0 telemetry. An MSIDset works with multiple
    MSIDs simultaneously.

    >>> from mica.archive import aca_hdr3
    >>> perigee_data = aca_hdr3.MSIDset(['ccd_temp', 'aca_temp', 'dac'],
    ...                                 '2012:001', '2012:030')

    Parameters:
        msids – list of MSIDs
        start – Chandra.Time compatible start time
        stop – Chandra.Time compatible stop time
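    Members of the set are retrieved by MSID name; as a sketch of typical
    access (the attribute names are the documented ``vals`` and ``times``):

    >>> ccd_temp = perigee_data['ccd_temp']  # each member is an aca_hdr3.MSID
    >>> ccd_temp.vals[:5], ccd_temp.times[:5]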
mica.archive.aca_l0.get_slot_data(start, stop, slot, imgsize=None, db=None, data_root=None, columns=None, centered_8x8=False)

    For the given parameters, retrieve telemetry and construct a
    masked array of the MSIDs available in that telemetry.

    >>> from mica.archive import aca_l0
    >>> slot_data = aca_l0.get_slot_data('2012:001', '2012:002', slot=7)
    >>> temp_ccd_8x8 = aca_l0.get_slot_data('2005:001', '2005:010',
    ...                                     slot=6, imgsize=[8],
    ...                                     columns=['TIME', 'TEMPCCD'])

    Parameters:
        start – start time of requested interval
        stop – stop time of requested interval
        slot – slot number integer (in the range 0 -> 7)
        imgsize – list of integers of desired image sizes (defaults to all -> [4, 6, 8])
        db – handle to archive lookup table
        data_root – parent directory that contains archfiles.db3 (for use when db handle not available)
        columns – list of desired columns in the ACA0 telemetry (defaults to all in 8x8 telemetry)
        centered_8x8 – boolean flag to reshape the IMGRAW field to (-1, 8, 8) (defaults to False)
    Returns:
        data structure for slot
    Return type:
        numpy masked recarray
mica.archive.aca_l0.get_files(obsid=None, start=None, stop=None, slots=None, imgsize=None, db=None, data_root=None)

    Retrieve list of files from the ACA0 archive lookup table that
    match the arguments. The database query returns files with

        tstart < stop
        and
        tstop > start

    which returns all files that contain any part of the interval
    between start and stop. If the obsid argument is provided, the
    archived obspar tstart/tstop (sybase aca.obspar table) are used.

    >>> from mica.archive import aca_l0
    >>> obsid_files = aca_l0.get_files(obsid=5438)
    >>> time_files = aca_l0.get_files(start='2012:001', stop='2012:002')
    >>> time_8x8 = aca_l0.get_files(start='2011:001', stop='2011:010',
    ...                             imgsize=[8])

    Parameters:
        obsid – obsid
        start – start time of requested interval
        stop – stop time of requested interval
        slots – list of integers of desired image slots to retrieve (defaults to all -> [0, 1, 2, 3, 4, 5, 6, 7])
        imgsize – list of integers of desired image sizes (defaults to all -> [4, 6, 8])
        db – handle to archive lookup table
        data_root – parent directory of Ska aca l0 archive
    Returns:
        interval files
    Return type:
        list
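    The query condition above is the standard interval-overlap test; as a
    minimal sketch in Python terms (the function name is illustrative):

    >>> def overlaps(f_tstart, f_tstop, start, stop):
    ...     """True if file span [f_tstart, f_tstop] contains any part of [start, stop]."""
    ...     return f_tstart < stop and f_tstop > start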
mica.archive.aca_l0.get_l0_images(start, stop, slot, imgsize=None, columns=None)

    Get ACA L0 images for the given ``start`` and ``stop`` times and
    the given ``slot``. Optionally filter on image size via ``imgsize``
    or change the default image metadata via ``columns``.

    >>> from mica.archive import aca_l0
    >>> imgs = aca_l0.get_l0_images('2012:001', '2012:002', slot=7)
    >>> imgs = aca_l0.get_l0_images('2005:001', '2005:002', slot=6, imgsize=[8])

    The default columns are:
    ['TIME', 'IMGROW0', 'IMGCOL0', 'BGDAVG', 'IMGSTAT', 'IMGFUNC1', 'IMGSIZE', 'IMGSCALE', 'INTEG']

    The image pixel values are given in units of DN. One can convert to e-/sec
    by multiplying by (5 / INTEG).

    Parameters:
        start – start time of requested interval
        stop – stop time of requested interval
        slot – slot number integer (in the range 0 -> 7)
        imgsize – list of integers of desired image sizes (default=[4, 6, 8])
        columns – image meta-data columns
    Returns:
        list of ACAImage objects
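    The DN to e-/sec conversion mentioned above, as a short sketch (the
    metadata access assumes the default columns, which include INTEG):

    >>> from mica.archive import aca_l0
    >>> imgs = aca_l0.get_l0_images('2012:001', '2012:002', slot=7)
    >>> img = imgs[0]
    >>> img_e_per_sec = img * (5 / img.meta['INTEG'])  # pixel DN -> e-/sec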
mica.archive.asp_l1.get_files(obsid=None, start=None, stop=None, revision=None, content=None)

    List asp_l1 files for an obsid or a time range.

    >>> from mica.archive import asp_l1
    >>> obs_files = asp_l1.get_files(6000)
    >>> obs_gspr = asp_l1.get_files(6000, content=['GSPROPS'])
    >>> range_fidpr = asp_l1.get_files(start='2012:001',
    ...                                stop='2012:030',
    ...                                content=['FIDPROPS'])

    The available content types are: ASPQUAL, ASPSOL, ASPSOLOBI, ACACAL,
    ACA_BADPIX, FIDPROPS, GYROCAL, GSPROPS, and ACACENT.

    Parameters:
        obsid – obsid
        start – time range start (Chandra.Time compatible)
        stop – time range stop (Chandra.Time compatible)
        revision – revision integer or 'last'; defaults to current released version
        content – archive CONTENT type; defaults to all available ASP1 types
    Returns:
        full path of files matching query
mica.archive.asp_l1.get_dir(obsid)

    Get ASP L1 directory for default/released products for an obsid.

    >>> from mica.archive import asp_l1
    >>> asp_l1.get_dir(2121)
    '/proj/sot/ska/data/mica/archive/asp1/02/02121'

    Parameters:
        obsid – obsid
    Returns:
        directory
    Return type:
        string
mica.archive.asp_l1.get_obs_dirs(obsid)

    Get all ASP L1 directories for an obsid in the Ska file archive.

    >>> from mica.archive import asp_l1
    >>> obsdirs = asp_l1.get_obs_dirs(6000)

    obsdirs will look something like:

    {'default': '/proj/sot/ska/data/mica/archive/asp1/06/06000',
     2: '/proj/sot/ska/data/mica/archive/asp1/06/06000_v02',
     3: '/proj/sot/ska/data/mica/archive/asp1/06/06000_v03',
     'last': '/proj/sot/ska/data/mica/archive/asp1/06/06000',
     'revisions': [2, 3]}

    Parameters:
        obsid – obsid
    Returns:
        map of obsid version to directories
    Return type:
        dictionary
mica.archive.asp_l1.get_atts(obsid=None, start=None, stop=None, revision=None, filter=True)

    Get the ground aspect solution quaternions and times covering obsid or start to stop,
    in the ACA frame.

    Parameters:
        obsid – obsid
        start – start time (DateTime compat)
        stop – stop time (DateTime compat)
        revision – aspect pipeline processing revision (integer version, None, or 'last')
        filter – boolean; if True, returned values will not include quaternions during times when asp_sol_status is non-zero
    Returns:
        Nx4 np.array of quaternions, np.array of N times, list of dicts with the header from each asol file.
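    A brief usage sketch (the obsid follows the earlier asp_l1 examples; the
    unpacking order matches the Returns description above):

    >>> from mica.archive import asp_l1
    >>> atts, times, asol_headers = asp_l1.get_atts(obsid=6000)
    >>> atts.shape  # (N, 4) quaternions in the ACA frame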
mica.archive.asp_l1.get_atts_from_files(asol_files, acal_files, aqual_files, filter=True)

    From ASP1 source files (asol, acal, aqual) get the ground aspect solution
    quaternions and times, in the ACA frame, covering the range of asol_files.
    The asol, acal, and aqual files are assumed to have one-to-one correspondence
    (though the asol to acal times are checked).

    Parameters:
        asol_files – list of aspect asol1 files
        acal_files – list of acal1 files associated with asol_files
        aqual_files – list of aqual files associated with asol_files
        filter – boolean; if True, returned values will not include quaternions during times when asp_sol_status is non-zero
    Returns:
        Nx4 np.array of quaternions, np.array of N times, list of dicts with the header from each asol file.
mica.vv.get_vv(obsid, version='default')

    Retrieve V&V data for an obsid/version. This reads the saved JSON and
    returns the previously-calculated V&V data.

    Parameters:
        obsid – obsid
        version – 'last', 'default', or version number
    Returns:
        dict of V&V data
mica.vv.get_vv_dir(obsid, version='default')

    Get directory containing V&V products for a requested obsid/version,
    including plots and JSON.

    Parameters:
        obsid – obsid
        version – 'last', 'default' or version number
    Returns:
        directory name for obsid/version
mica.vv.get_vv_files(obsid, version='default')

    Get list of V&V files available for a requested obsid/version.

    Parameters:
        obsid – obsid
        version – 'default', 'last' or version number
    Returns:
        list of files
mica.vv.get_rms_data()

    Retrieve/return all data from the RMS trending H5 archive.

    Returns:
        numpy array of RMS data for each star/obsid/version
mica.vv.get_arch_vv(obsid, version='last')

    Given obsid and version, find archived ASP1 and obspar products and
    run V&V. Effort is made to find the obspar that was actually used during
    creation of the ASP1 products.

    Parameters:
        obsid – obsid
        version – 'last', 'default', or revision number of ASP1 products
    Returns:
        mica.vv.Obi V&V object
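    As a usage sketch tying these functions together (the obsid is
    illustrative):

    >>> from mica import vv
    >>> data = vv.get_vv(19001)         # saved V&V data for the default version
    >>> files = vv.get_vv_files(19001)  # products on disk for that obsid/version
    >>> obi = vv.get_arch_vv(19001)     # re-run V&V from archived ASP1 products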
Source code for mica.archive.aca_dark.dark_cal
# Licensed under a 3-clause BSD style license - see LICENSE.rst
import re
import os
from collections import OrderedDict
import json

import six
from six.moves import zip

import numpy as np
from astropy.io import fits, ascii
from astropy.table import Column
import pyyaks.context
from cxotime import CxoTime
from chandra_aca.aca_image import ACAImage

from chandra_aca.dark_model import dark_temp_scale
from chandra_aca.dark_model import DARK_SCALE_4C  # noqa
from mica.cache import lru_cache
from mica.common import MICA_ARCHIVE_PATH, MissingDataError
from . import file_defs

DARK_CAL = pyyaks.context.ContextDict('dark_cal')

MICA_FILES = pyyaks.context.ContextDict('update_mica_files',
                                        basedir=MICA_ARCHIVE_PATH)
MICA_FILES.update(file_defs.MICA_FILES)


def date_to_dark_id(date):
    """
    Convert ``date`` to the corresponding YYYYDOY format for a dark cal identifier.

    :param date: any CxoTime compatible format
    :returns: dark id (YYYYDOY)
    """
    time = CxoTime(date)
    date_str = time.date
    if not time.shape:
        return date_str[:4] + date_str[5:8]
    date_str = np.atleast_1d(date_str)
    chars = date_str.view((str, 1)).reshape(-1, date_str.dtype.itemsize // 4)
    result = np.hstack([chars[:, :4], chars[:, 5:8]])
    result = np.frombuffer(result.tobytes(), dtype=(str, 7))
    return result.reshape(time.shape)


def dark_id_to_date(dark_id):
    """
    Convert ``dark_id`` (YYYYDOY) to the corresponding CxoTime 'date' format.

    :param dark_id: dark id (YYYYDOY)
    :returns: str in CxoTime 'date' format
    """
    shape = np.shape(dark_id)
    if not shape:
        return '{}:{}'.format(dark_id[:4], dark_id[4:])
    b = np.atleast_1d(dark_id).view((str, 1)).reshape(-1, 7)
    sep = np.array([':'] * len(b))[:, None]
    b = np.hstack([b[:, :4], sep, b[:, 4:]])
    b = np.frombuffer(b.tobytes(), dtype=(str, 8))
    return b.reshape(shape)


@lru_cache()
def get_dark_cal_dirs(dark_cals_dir=MICA_FILES['dark_cals_dir'].abs):
    """
    Get an ordered dict of directory paths containing dark current calibration files,
    where the key is the dark cal identifier (YYYYDOY) and the value is the path.

    :param dark_cals_dir: directory containing dark cals.
    :returns: ordered dict of absolute directory paths
    """
    dark_cal_ids = sorted([fn for fn in os.listdir(dark_cals_dir)
                           if re.match(r'[12]\d{6}$', fn)])
    dark_cal_dirs = [os.path.join(dark_cals_dir, id_)
                     for id_ in dark_cal_ids]
    return OrderedDict(zip(dark_cal_ids, dark_cal_dirs))


@lru_cache()
def get_dark_cal_ids(dark_cals_dir=MICA_FILES['dark_cals_dir'].abs):
    """
    Get an ordered dict with dates as keys and dark cal identifiers (YYYYDOY) as values.

    :param dark_cals_dir: directory containing dark cals.
    :returns: ordered dict of dark cal identifiers keyed by date
    """
    dark_cal_ids = sorted([fn for fn in os.listdir(dark_cals_dir)
                           if re.match(r'[12]\d{6}$', fn)])
    dates = [CxoTime(d[:4] + ':' + d[4:]).date for d in dark_cal_ids]
    return OrderedDict(zip(dates, dark_cal_ids))


def get_dark_cal_id(date, select='before', dark_cal_ids=None):
    """
    Return the dark calibration id corresponding to ``date``.

    If ``select`` is ``'before'`` (default) then use the most recent calibration
    which occurs before ``date``. Other valid options are ``'after'`` and ``'nearest'``.

    :param date: date in any CxoTime format
    :param select: method to select dark cal (before|nearest|after)
    :param dark_cal_ids: list of all dark-cal IDs (optional, the output of get_dark_cal_ids)

    :returns: dark cal id string (YYYYDOY)
    """
    dark_id = _get_dark_cal_id_vector(date, select=select, dark_cal_ids=dark_cal_ids)
    if not dark_id.shape:
        # returning an instance of the type, not a numpy array
        return dark_id.tolist()
    return dark_id


def _get_dark_cal_id_scalar(date, select='before', dark_cal_ids=None):
    if dark_cal_ids is None:
        dark_cal_ids = list(get_dark_cal_dirs().keys())
    dark_id = date_to_dark_id(date)

    # Special case: if dark_id is exactly an existing dark cal then return that
    # dark cal regardless of the select method.
    if dark_id in dark_cal_ids:
        return dark_id

    date_secs = CxoTime(date).secs
    dark_cal_secs = CxoTime(np.array([dark_id_to_date(id_) for id_ in dark_cal_ids])).secs

    if select == 'nearest':
        ii = np.argmin(np.abs(dark_cal_secs - date_secs))
    elif select in ('before', 'after'):
        ii = np.searchsorted(dark_cal_secs, date_secs)
        if select == 'before':
            ii -= 1
    else:
        raise ValueError('select arg must be one of "nearest", "before", or "after"')

    if ii < 0:
        earliest = CxoTime(dark_cal_secs[0]).date[:8]
        raise MissingDataError(
            f'No dark cal found before {earliest} '
            f'(requested dark cal on {date})'
        )

    try:
        out_dark_id = dark_cal_ids[ii]
    except IndexError:
        raise MissingDataError('No dark cal found {} {}'.format(select, date))

    return out_dark_id


_get_dark_cal_id_vector = np.vectorize(_get_dark_cal_id_scalar, excluded=['select', 'dark_cal_ids'])


@DARK_CAL.cache
def _get_dark_cal_image_props(date, select='before', t_ccd_ref=None, aca_image=False,
                              allow_negative=False):
    """
    Return the dark calibration image (e-/s) nearest to ``date`` and the corresponding
    dark_props file.

    :param date: date in any CxoTime format
    :param select: method to select dark cal (before|nearest|after)
    :param t_ccd_ref: rescale dark map to temperature (degC, default=no scaling)
    :param aca_image: return an ACAImage instance
    :param allow_negative: allow negative values in raw dark map (default=False)

    :returns: 1024 x 1024 ndarray with dark cal image in e-/s, props dict
    """
    DARK_CAL['id'] = get_dark_cal_id(date, select)

    with fits.open(MICA_FILES['dark_image.fits'].abs, memmap=False) as hdus:
        dark = hdus[0].data
        # Recast as native byte ordering since FITS is typically not. This
        # statement is normally the same as dark.astype(np.float32).
        dark = dark.astype(dark.dtype.type)

    with open(MICA_FILES['dark_props.json'].abs, 'r') as fh:
        props = json.load(fh)

    # Change unicode to ascii at top level
    if six.PY2:
        keys = list(props.keys())
        asciiprops = {}
        for key in keys:
            asciiprops[str(key)] = props[key]
        props = asciiprops

    if t_ccd_ref is not None:
        # Scale factor to adjust data to an effective temperature of t_ccd_ref.
        # For t_ccds warmer than t_ccd_ref this scale factor is < 1, i.e. the
        # observed dark current is made smaller to match what it would be at the
        # lower reference temperature.
        t_ccd = props['ccd_temp']
        dark *= dark_temp_scale(t_ccd, t_ccd_ref)

    if not allow_negative:
        np.clip(dark, a_min=0, a_max=None, out=dark)

    if aca_image:
        dark = ACAImage(dark, row0=-512, col0=-512)

    return dark, props


def get_dark_cal_image(date, select='before', t_ccd_ref=None, aca_image=False,
                       allow_negative=False):
    """
    Return the dark calibration image (e-/s) nearest to ``date``.

    If ``select`` is ``'before'`` (default) then use the most recent calibration
    which occurs before ``date``. Other valid options are ``'after'`` and ``'nearest'``.

    :param date: date in any CxoTime format
    :param select: method to select dark cal (before|nearest|after)
    :param t_ccd_ref: rescale dark map to temperature (degC, default=no scaling)
    :param aca_image: return an ACAImage instance instead of ndarray
    :param allow_negative: allow negative values in raw dark map (default=False)

    :returns: 1024 x 1024 ndarray with dark cal image in e-/s
    """
    dark, props = _get_dark_cal_image_props(date, select=select, t_ccd_ref=t_ccd_ref,
                                            aca_image=aca_image,
                                            allow_negative=allow_negative)
    return dark


def get_dark_cal_props(date, select='before', include_image=False, t_ccd_ref=None,
                       aca_image=False, allow_negative=False):
    """
    Return a dark calibration properties structure for ``date``.

    If ``select`` is ``'before'`` (default) then use the most recent calibration
    which occurs before ``date``. Other valid options are ``'after'`` and ``'nearest'``.

    If ``include_image`` is True then an additional column or key ``image`` is
    defined which contains the corresponding 1024x1024 dark cal image.

    :param date: date in any CxoTime format
    :param select: method to select dark cal (before|nearest|after)
    :param include_image: include the dark cal images in output (default=False)
    :param t_ccd_ref: rescale dark map to temperature (degC, default=no scaling)
    :param aca_image: return an ACAImage instance instead of ndarray
    :param allow_negative: allow negative values in raw dark map (default=False)

    :returns: dict of dark calibration properties
    """
    dark, props = _get_dark_cal_image_props(date, select=select, t_ccd_ref=None,
                                            aca_image=aca_image,
                                            allow_negative=allow_negative)

    if include_image:
        props['image'] = dark

    return props


def get_dark_cal_props_table(start=None, stop=None, include_image=False, as_table=True):
    """
    Return a table of dark calibration properties between ``start`` and ``stop``.

    If ``include_image`` is True then an additional column or key ``image`` is
    defined which contains the corresponding 1024x1024 dark cal image.

    If ``as_table`` is True (default) then the result is an astropy Table object.
    If False then a list of dicts is returned. In this case the full contents
    of the properties file including replica properties is available.

    :param start: start time (default=beginning of mission)
    :param stop: stop time (default=now)
    :param include_image: include the dark cal images in output (default=False)
    :param as_table: return a Table instead of a list (default=True)

    :returns: astropy Table or list of dark calibration properties
    """
    start_id = date_to_dark_id('1999:001:12:00:00' if start is None else start)
    stop_id = date_to_dark_id(stop)
    dark_dirs = [dark_id for dark_id in get_dark_cal_dirs()
                 if dark_id >= start_id and dark_id <= stop_id]

    # Get the list of properties structures
    props = [get_dark_cal_props(dark_id, include_image=include_image) for dark_id in dark_dirs]

    if as_table:
        # Get rid of non-scalar data structures and collect col names
        names = []
        for prop in props:
            for key, val in prop.copy().items():
                if isinstance(val, (list, tuple, dict)):
                    del prop[key]
                elif key not in names:
                    names.append(key)

        lines = []
        lines.append(','.join(names))
        for prop in props:
            vals = [str(prop.get(name, '')) for name in names]
            lines.append(','.join(vals))
        table_props = ascii.read(lines, format='csv')

        if include_image:
            x = np.vstack([prop['image'][np.newaxis, :] for prop in props])
            images = Column(x)
            table_props['image'] = images
        props = table_props

    return props
Source code for mica.archive.aca_hdr3
# Licensed under a 3-clause BSD style license - see LICENSE.rst
"""
Experimental/alpha code to work with ACA L0 Header 3 data
"""
import re
import numpy as np
import numpy.ma as ma
import collections
from scipy.interpolate import interp1d
from Chandra.Time import DateTime
from Ska.Numpy import search_both_sorted

from mica.archive import aca_l0
from mica.common import MissingDataError


# In case it isn't obvious, for an MSID of HD3TLM<I><W> in ACA image data
# for slot <S>, that maps to table 11.1 like:
#   Image No. = <S>
#   Image type = <I>
#   Hdr 3 Word = <W>


def two_byte_sum(byte_msids, scale=1):
    def func(slot_data):
        return ((slot_data[byte_msids[0]].astype('int') >> 7) * (-1 * 65535)
                + (slot_data[byte_msids[0]].astype('int') << 8)
                + (slot_data[byte_msids[1]].astype('int'))) * scale
    return func
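# Illustrative worked example of two_byte_sum (byte values assumed, not taken
# from telemetry): for a high byte of 0x80 and a low byte of 0x01,
#   (0x80 >> 7) * (-65535) + (0x80 << 8) + 0x01 = -65535 + 32768 + 1 = -32766
# so the high bit effectively acts as a sign flag, and the combined value is
# then multiplied by ``scale``.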

# 8x8 header values (largest possible ACA0 header set)
ACA_DTYPE = [('TIME', '>f8'), ('QUALITY', '>i4'), ('IMGSIZE', '>i4'),
             ('HD3TLM62', '|u1'),
             ('HD3TLM63', '|u1'), ('HD3TLM64', '|u1'), ('HD3TLM65', '|u1'),
             ('HD3TLM66', '|u1'), ('HD3TLM67', '|u1'), ('HD3TLM72', '|u1'),
             ('HD3TLM73', '|u1'), ('HD3TLM74', '|u1'), ('HD3TLM75', '|u1'),
             ('HD3TLM76', '|u1'), ('HD3TLM77', '|u1'), ('FILENAME', '<U128')
             ]

ACA_DTYPE_NAMES = [k[0] for k in ACA_DTYPE]


# Analog-to-digital conversion table: hex reading -> temperature (degC)
a_to_d = [
    ('3D70', -40),
    ('3C9E', -35),
    ('3B98', -30),
    ('3A55', -25),
    ('38CD', -20),
    ('36FD', -15),
    ('34E1', -10),
    ('3279', -5),
    ('2FCB', 0),
    ('2CDF', 5),
    ('29C5', 10),
    ('2688', 15),
    ('2340', 20),
    ('2000', 25),
    ('1CD3', 30),
    ('19CC', 35),
    ('16F4', 40),
    ('1454', 45),
    ('11EF', 50),
    ('0FC5', 55),
    ('0DD8', 60),
    ('0C22', 65),
    ('0A9F', 70),
    ('094D', 75),
    ('0825', 80)]

# reverse this to end up with increasing hex values
a_to_d = a_to_d[::-1]
a_to_d = np.rec.fromrecords(a_to_d, names=['hex', 'tempC'])
x = np.array([int(a, 16) for a in a_to_d['hex']])
ad_func = interp1d(x, a_to_d['tempC'], kind='cubic', bounds_error=False)


def ad_temp(msids):

    def func(slot_data):
        sum = two_byte_sum(msids)(slot_data)
        # As of scipy 0.17 cannot interpolate a masked array. In this
        # case we can temporarily fill with some value that will always
        # be in the range, then re-mask afterward.
        masked = isinstance(sum, np.ma.MaskedArray)
        if masked:
            mask = sum.mask
            sum = sum.filled(16000)
        out = ad_func(sum)
        if masked:
            out = np.ma.MaskedArray(out, mask=mask)
        return out

    return func


# Dictionary that defines the header 3 'MSID's. Also includes a 'value' key
# that describes how to determine the value of the MSID.

HDR3_DEF = {
    '062': {'desc': 'AC status word',
            'msid': 'ac_status_word',
            'longdesc': """
AC status word. A status word read from the AC. The bits in the word are
defined as follows:

xxxx xxrr rrtt eccp

X = spare bits
R = CCD readout mode
    1 => S/H input is grounded during pixel readout
    2 => CCD reset is pulsed during column flush
    4 => CCD reset/sw is pulsed during row shifts
    8 => MUX is switched to ground when A/D not in use
T = Test signal select (test signals not available in flight)
E = Cmd error. AC cmd input buffer was overwritten
C = Clock period for parallel shifts
    0 => 6 microsec
    1 => 12 microsec
    2 => 24 microsec
    3 => 48 microsec
P = PromWrite? flag: true when AC EEPROM is in the write mode.
"""},

    '064': {'desc': 'Misc status bits',
            'msid': 'misc_status_bits',
            'longdesc': """
Miscellaneous status bits showing the status of the following 16 flag variables
starting with the LSB and ending with the MSB:

bit 0 (LSB): AcSendTimeOut?
bit 1: AcIdleTimeOut?
bit 2: TecActive?
bit 3: TecHeat?
bit 4: DecAcTable?
bit 5: AcTableCkSumOK?
bit 6: StackError?
bit 7: WarmBoot?
bit 8: IdleCode LSB
bit 9: CalMode?
bit 10: CalModePending?
bit 11: IuData?
bit 12: IuDataPending?
bit 13: DsnFixed?
bit 14: InitialCalFillOK?
bit 15 (MSB): IoUpdTimeout?
"""},

    '066': {'desc': 'A/D CCD molyb therm 1',
            'msid': 'ccd_molyb_therm_1',
            'value': ad_temp(['HD3TLM66', 'HD3TLM67']),
            'longdesc': """
A/D converter reading for the CCD moly base thermistor number 1
"""},

    '072': {'desc': 'A/D CCD molyb therm 2',
            'msid': 'ccd_molyb_therm_2',
            'value': ad_temp(['HD3TLM72', 'HD3TLM73']),
            'longdesc': """
A/D converter reading for the CCD moly base thermistor number 2
"""},

    '074': {'desc': 'A/D CCD detector therm',
            'msid': 'ccd_det_therm',
            'value': ad_temp(['HD3TLM74', 'HD3TLM75']),
            'longdesc': """
A/D converter reading for the CCD detector thermistor
"""},

    '076': {'desc': 'A/D +5 volt PS',
            'msid': 'ad_5v_ps',
            'value': two_byte_sum(['HD3TLM76', 'HD3TLM77'], scale=0.20518),
            'longdesc': """
A/D converter reading for the +5 volt power supply; 1 LSB=0.30518 mv
"""},

    '162': {'desc': 'A/D +15 volt PS',
            'msid': 'ad_15v_ps',
            'value': two_byte_sum(['HD3TLM62', 'HD3TLM63'], scale=0.61035),
            'longdesc': """
A/D converter reading for the +15 volt power supply; 1 LSB=0.61035 mv
"""},

    '164': {'desc': 'A/D -15 volt PS',
            'msid': 'ad_m15v_ps',
            'value': two_byte_sum(['HD3TLM64', 'HD3TLM65'], scale=0.61035),
            'longdesc': """
A/D converter reading for the -15 volt power supply; 1 LSB=0.61035 mv
"""},

    '166': {'desc': 'A/D +27 volt PS',
            'msid': 'ad_27v_ps',
            'value': two_byte_sum(['HD3TLM66', 'HD3TLM67'], scale=1.04597),
            'longdesc': """
A/D converter reading for the +27 volt power supply; 1 LSB=1.04597 mv
"""},

    '172': {'desc': 'A/D analog ground',
            'msid': 'ad_analog_gnd',
            'value': ad_temp(['HD3TLM72', 'HD3TLM73']),
            'longdesc': """
A/D converter reading for analog ground; 1 LSB=0.30518 mv
"""},

    '174': {'desc': 'A/D for A/D convertor therm',
            'msid': 'ad_converter_therm',
            'value': ad_temp(['HD3TLM74', 'HD3TLM75']),
            'longdesc': """
A/D converter reading for the A/D converter thermistor.
"""},

    '176': {'desc': 'A/D secondary mirror therm. HRMA side',
            'msid': 'ad_smhs_therm',
            'value': ad_temp(['HD3TLM76', 'HD3TLM77']),
            'longdesc': """
A/D converter reading for the secondary mirror thermistor, HRMA side
"""},

    '262': {'desc': 'A/D secondary mirror therm. Opp HRMA side',
            'msid': 'ad_smohs_therm',
            'value': ad_temp(['HD3TLM62', 'HD3TLM63']),
            'longdesc': """
A/D converter reading for the secondary mirror thermistor, Opposite from the
HRMA side
"""},

    '264': {'desc': 'A/D primary mirror therm. HRMA side',
            'msid': 'ad_pmhs_therm',
            'value': ad_temp(['HD3TLM64', 'HD3TLM65']),
            'longdesc': """
A/D converter reading for the primary mirror thermistor, HRMA side
"""},

    '266': {'desc': 'A/D primary mirror therm. Opp HRMA side',
            'msid': 'ad_pmohs_therm',
            'value': ad_temp(['HD3TLM66', 'HD3TLM67']),
            'longdesc': """
A/D converter reading for the primary mirror thermistor, opposite from the
HRMA side
"""},

    '272': {'desc': 'A/D AC housing therm. HRMA side',
            'msid': 'ad_achhs_therm',
            'value': ad_temp(['HD3TLM72', 'HD3TLM73']),
            'longdesc': """
A/D converter reading for the AC housing thermistor, HRMA side
"""},

    '274': {'desc': 'A/D AC housing therm. Opp HRMA side',
            'msid': 'ad_achohs_therm',
            'value': ad_temp(['HD3TLM74', 'HD3TLM75']),
            'longdesc': """
A/D converter reading for the AC housing thermistor, opposite HRMA side
"""},

    '276': {'desc': 'A/D lens cell therm.',
            'msid': 'ad_lc_therm',
            'value': ad_temp(['HD3TLM76', 'HD3TLM77']),
            'longdesc': """
A/D converter reading for the lens cell thermistor
"""},

    '362': {'desc': 'Processor stack pointer and telem update counter',
            'msid': 'proc_stack_telem_ctr',
            'longdesc': """
A word containing the processor data stack pointer in the high byte, and
an update counter in the low byte that increments once for every 1.025
second telemetry update.
"""},

    '364': {'desc': 'Science header pulse period',
            'msid': 'sci_hdr_pulse_period',
            'longdesc': """
The science header pulse period, as measured by the PEA; 1 LSB = 2 microseconds
""",
            'nbytes': 4},

    '372': {'desc': '16-bit zero offset for pixels from CCD quad A',
            'msid': 'zero_off16_quad_a',
            'longdesc': """
A 16-bit zero offset for pixels read from CCD quadrant A; 1 LSB = 1 A/D
converter count (nominally 5 electrons)
"""},

    '374': {'desc': '16-bit zero offset for pixels from CCD quad B',
            'msid': 'zero_off16_quad_b',
            'longdesc': """
A 16-bit zero offset for pixels read from CCD quadrant B; 1 LSB = 1 A/D
converter count (nominally 5 electrons)
"""},

    '376': {'desc': '16-bit zero offset for pixels from CCD quad C',
            'msid': 'zero_off16_quad_c',
            'longdesc': """
A 16-bit zero offset for pixels read from CCD quadrant C; 1 LSB = 1 A/D
converter count (nominally 5 electrons)
"""},

    '462': {'desc': '16-bit zero offset for pixels from CCD quad D',
            'msid': 'zero_off16_quad_d',
            'longdesc': """
A 16-bit zero offset for pixels read from CCD quadrant D; 1 LSB = 1 A/D
converter count (nominally 5 electrons)
"""},

    '464': {'desc': '32-bit zero offset for pixels from CCD quad A',
            'msid': 'zero_off32_quad_a',
            'longdesc': """
A 32-bit zero offset for pixels read from CCD quadrant A; 1 LSB = 2^-16
A/D converter counts
""",
            'nbytes': 4},

    '472': {'desc': '32-bit zero offset for pixels from CCD quad B',
            'msid': 'zero_off32_quad_b',
            'longdesc': """
A 32-bit zero offset for pixels read from CCD quadrant B; 1 LSB = 2^-16
A/D converter counts
""",
            'nbytes': 4},

    '476': {'desc': '32-bit zero offset for pixels from CCD quad C',
            'msid': 'zero_off32_quad_c',
            'longdesc': """
A 32-bit zero offset for pixels read from CCD quadrant C; 1 LSB = 2^-16
A/D converter counts
""",
            'nbytes': 4},

    '564': {'desc': '32-bit zero offset for pixels from CCD quad D',
            'msid': 'zero_off32_quad_d',
            'longdesc': """
A 32-bit zero offset for pixels read from CCD quadrant D; 1 LSB = 2^-16
A/D converter counts
""",
            'nbytes': 4},

    '572': {'desc': 'last CCD flush duration',
            'msid': 'ccd_flush_dur',
            'longdesc': """
The time required for the most recent flush of the CCD; 1 LSB=2 microseconds
"""},

    '574': {'desc': 'CCD row shift clock period',
            'msid': 'ccd_row_shift_period',
            'longdesc': """
The CCD row shift clock period currently in effect; 1 LSB = 1 microsecond
"""},

    '576': {'desc': 'average background reading',
            'msid': 'avg_bkg',
            'longdesc': """
An overall average background reading derived from the most recent CCD readout.
This is an average from all tracked images and from all search readout
segments. One LSB = 1 A/D converter count (nominally 5 electrons).
""",
            'value': two_byte_sum(['HD3TLM76', 'HD3TLM77'], scale=5)},

    '662': {'desc': 'Header 1 for DSN records',
            'msid': 'dsn_hdr1',
            'longdesc': """
Header 1 for Deep Space Network record.
""",
            'nbytes': 6},

    '672': {'desc': 'record counter and Header 2 for DSN',
            'msid': 'dsn_hdr2',
            'longdesc': """
The record counter and Header 2 for Deep Space Network records.
The record counter occupies the three high order bytes, and Header 2
occupies the low order byte.
""",
            'nbytes': 4},

    '676': {'desc': 'CCD temperature',
            'msid': 'ccd_temp',
            'longdesc': """
CCD temperature. 1 LSB=0.01/(2^16) degrees C. The high order 16 bits give the
CCD temperature in units of 1 LSB = 0.01 degrees C.
""",
            'nbytes': 4,
            'value': two_byte_sum(['HD3TLM76', 'HD3TLM77'], scale=0.01)},

    '764': {'desc': 'CCD setpoint',
            'msid': 'ccd_setpoint',
            'longdesc': """
The CCD temperature control setpoint; 1 LSB=0.01 degrees C
""",
            'value': two_byte_sum(['HD3TLM64', 'HD3TLM65'])},

    '766': {'desc': 'temperature for position/angle cal',
            'msid': 'aca_temp',
            'longdesc': """
The temperature used in the angle calibration equations that convert star
positions from CCD row and column coordinates to Y and Z angles for OBC
telemetry; 1 LSB = 1/256 degrees C.
""",
            'nbytes': 4,
            'value': two_byte_sum(['HD3TLM72', 'HD3TLM73'],
                                  scale=1 / 256.)},

    '774': {'desc': 'RAM address of last write-read test failure',
            'msid': 'last_ram_fail_addr',
            'longdesc': """
The address in RAM of the failure most recently detected by the RAM
write-and-read test
"""},

    '776': {'desc': 'TEC DAC number',
            'msid': 'dac',
            'value': two_byte_sum(['HD3TLM76', 'HD3TLM77']),
            'longdesc': """
The number most recently written to the TEC power control DAC.
"""}}

MSID_ALIASES = {HDR3_DEF[key]['msid']: key for key in HDR3_DEF}


class MSID(object):
    """
    ACA header 3 data object to work with header 3 data from
    available 8x8 ACA L0 telemetry::

        >>> from mica.archive import aca_hdr3
        >>> ccd_temp = aca_hdr3.MSID('ccd_temp', '2012:001', '2012:020')
        >>> type(ccd_temp.vals)
        'numpy.ma.core.MaskedArray'

    When given an ``msid`` and a ``start`` and ``stop`` range, the object will
    query the ACA L0 archive to populate the object, which includes the MSID
    values (``vals``) at the given times (``times``).

    The parameter ``msid_data`` is used to create an MSID object from
    the data of another MSID object.

    When ``filter_bad`` is supplied then only valid data values are stored
    and the ``vals`` and ``times`` attributes are `np.ndarray` instead of
    `ma.MaskedArray`.

    :param msid: MSID name
    :param start: Chandra.Time compatible start time
    :param stop: Chandra.Time compatible stop time
    :param msid_data: data dictionary or object from another MSID object
    :param filter_bad: remove missing values
    """

    def __init__(self, msid, start, stop, msid_data=None, filter_bad=False):
        if msid_data is None:
            msid_data = MSIDset([msid], start, stop)[msid]
        self.msid = msid
        self.tstart = DateTime(start).secs
        self.tstop = DateTime(stop).secs
        self.datestart = DateTime(self.tstart).date
        self.datestop = DateTime(self.tstop).date
        # msid_data may be a dictionary or an object with these attributes
        for attr in ('hdr3_msid', 'vals', 'times', 'desc', 'longdesc'):
            if hasattr(msid_data, attr):
                setattr(self, attr, getattr(msid_data, attr))
            else:
                setattr(self, attr, msid_data.get(attr))

        # If requested filter out bad values and set self.bad = None
        if filter_bad:
            self.filter_bad()

    def copy(self):
        from copy import deepcopy
        return deepcopy(self)

    def filter_bad(self, copy=False):
        """Filter out any missing values.

        After applying this method the ``vals`` attributes will be a
        plain np.ndarray object instead of a masked array.

        :param copy: return a copy of MSID object with bad values filtered
        """
        obj = self.copy() if copy else self

        if isinstance(obj.vals, ma.MaskedArray):
            obj.times = obj.times[~obj.vals.mask]
            obj.vals = obj.vals.compressed()

        if copy:
            return obj


class Msid(MSID):
    """
    ACA header 3 data object to work with header 3 data from available 8x8 ACA L0
    telemetry.

        >>> from mica.archive import aca_hdr3
        >>> ccd_temp = aca_hdr3.Msid('ccd_temp', '2012:001', '2012:020')
        >>> type(ccd_temp.vals)
        'numpy.ndarray'

    When given an ``msid`` and a ``start`` and ``stop`` range, the object will
    query the ACA L0 archive to populate the object, which includes the MSID
    values (``vals``) at the given times (``times``). Only valid data values
    are returned.

    :param msid: MSID
    :param start: Chandra.Time compatible start time
    :param stop: Chandra.Time compatible stop time
    """

    def __init__(self, msid, start, stop):
        super(Msid, self).__init__(msid, start, stop, filter_bad=True)


def confirm_msid(req_msid):
    """
    Check to see if the 'MSID' is an alias or is in the HDR3_DEF
    dictionary. If in the aliases, return the unaliased value.

    :param req_msid: requested msid
    :return: hdr3_def MSID name
    """
    if req_msid in MSID_ALIASES:
        return MSID_ALIASES[req_msid]
    else:
        if req_msid not in HDR3_DEF:
            raise MissingDataError("msid %s not found" % req_msid)
        else:
            return req_msid


def slot_for_msid(msid):
    """
    For a given 'MSID' return the slot number that contains those data.
    """
    mmatch = re.match(r'(\d)\d\d', msid)
    slot = int(mmatch.group(1))
    return slot


class MSIDset(collections.OrderedDict):
    """
    ACA header 3 data object to work with header 3 data from
    available 8x8 ACA L0 telemetry. An MSIDset works with multiple
    MSIDs simultaneously.

        >>> from mica.archive import aca_hdr3
        >>> perigee_data = aca_hdr3.MSIDset(['ccd_temp', 'aca_temp', 'dac'],
        ...                                 '2012:001', '2012:030')

    :param msids: list of MSIDs
    :param start: Chandra.Time compatible start time
    :param stop: Chandra.Time compatible stop time
    """

    def __init__(self, msids, start, stop):
        super(MSIDset, self).__init__()
        self.tstart = DateTime(start).secs
        self.tstop = DateTime(stop).secs
        self.datestart = DateTime(self.tstart).date
        self.datestop = DateTime(self.tstop).date
        slot_datas = {}
        slots = set(slot_for_msid(confirm_msid(msid)) for msid in msids)
        for slot in slots:
            # get the 8x8 data
            tstop = self.tstop + 33.0  # Major frame of padding
            slot_data = aca_l0.get_slot_data(
                self.tstart, tstop, slot,
                imgsize=[8], columns=ACA_DTYPE_NAMES)

            # Find samples where the time stamp changes by a value other than 4.1 secs
            # (which is the value for 8x8 readouts). In that case there must have been a
            # break in L0 decom, typically due to a change to 4x4 or 6x6 data.
            #   t[0] = 1.0
            #   t[1] = 5.1  <= This record could be bad, as indicated by the gap afterward
            #   t[2, 3] = 17.4, 21.5
            # To form the time diffs first add `tstop` to the end so that if 8x8 data
            # does not extend through `tstop` then the last record gets chopped.
            dt = np.diff(np.concatenate([slot_data['TIME'], [tstop]]))
            bad = np.abs(dt - 4.1) > 1e-3
            slot_data[bad] = ma.masked

            # Chop off the padding
            i_stop = np.searchsorted(slot_data['TIME'], self.tstop, side='right')
            slot_data = slot_data[:i_stop]

            # explicitly unmask useful columns
            slot_data['TIME'].mask = ma.nomask
            slot_data['IMGSIZE'].mask = ma.nomask
            slot_data['FILENAME'].mask = ma.nomask
            slot_datas[slot] = slot_data

        # Make a shared time ndarray that is the union of the time sets in the
        # slots. The ACA L0 telemetry has the same timestamps across slots,
        # so the only differences here are caused by different times in
        # non-TRAK across the slots (usually SRCH differences at the beginning
        # of the observation)
        shared_time = np.unique(np.concatenate([
            slot_datas[slot]['TIME'].data for slot in slots]))
        for msid in msids:
            hdr3_msid = confirm_msid(msid)
            slot = slot_for_msid(hdr3_msid)
            full_data = ma.zeros(len(shared_time),
                                 dtype=slot_datas[slot].dtype)
            full_data.mask = ma.masked
            fd_idx = search_both_sorted(shared_time,
                                        slot_datas[slot]['TIME'])
            full_data[fd_idx] = slot_datas[slot]
            # make a data dictionary to feed to the MSID constructor
            slot_data = {'vals': HDR3_DEF[hdr3_msid]['value'](full_data),
                         'desc': HDR3_DEF[hdr3_msid]['desc'],
                         'longdesc': HDR3_DEF[hdr3_msid]['longdesc'],
                         'times': shared_time,
                         'hdr3_msid': hdr3_msid}
            self[msid] = MSID(msid, start, stop, slot_data)
Source code for mica.archive.aca_l0
-# Licensed under a 3-clause BSD style license - see LICENSE.rst
-import os
-from glob import glob
-import re
-import logging
-import shutil
-import time
-import gzip
-from astropy.table import Table
-import astropy.io.fits as pyfits
-import numpy as np
-import numpy.ma as ma
-import argparse
-import collections
-import tables
-from itertools import count
-from pathlib import Path
-
-import ska_dbi
-import Ska.arc5gl
-from Chandra.Time import DateTime
-import Ska.File
-from chandra_aca.aca_image import ACAImage
-# import kadi later in obsid_times
-
-from mica.common import MICA_ARCHIVE, MissingDataError
-
-logger = logging.getLogger('aca0 fetch')
-logger.setLevel(logging.INFO)
-logger.addHandler(logging.StreamHandler())
-
-# borrowed from eng_archive
-ARCHFILES_HDR_COLS = ('tstart', 'tstop', 'startmjf', 'startmnf',
- 'stopmjf', 'stopmnf',
- 'tlmver', 'ascdsver', 'revision', 'date',
- 'imgsize')
-
-FILETYPE = {'level': 'L0',
- 'instrum': 'PCAD',
- 'content': 'ACADATA',
- 'arc5gl_query': 'ACA0',
- 'fileglob': 'aca*fits*'}
-
-ACA_DTYPE = (('TIME', '>f8'), ('QUALITY', '>i4'), ('MJF', '>i4'),
- ('MNF', '>i4'),
- ('END_INTEG_TIME', '>f8'), ('INTEG', '>f4'), ('GLBSTAT', '|u1'),
- ('COMMCNT', '|u1'), ('COMMPROG', '|u1'), ('IMGFID1', '|u1'),
- ('IMGNUM1', '|u1'), ('IMGFUNC1', '|u1'), ('IMGSTAT', '|u1'),
- ('IMGROW0', '>i2'), ('IMGCOL0', '>i2'), ('IMGSCALE', '>i2'),
- ('BGDAVG', '>i2'), ('IMGFID2', '|u1'), ('IMGNUM2', '|u1'),
- ('IMGFUNC2', '|u1'), ('BGDRMS', '>i2'), ('TEMPCCD', '>f4'),
- ('TEMPHOUS', '>f4'), ('TEMPPRIM', '>f4'), ('TEMPSEC', '>f4'),
- ('BGDSTAT', '|u1'), ('IMGFID3', '|u1'), ('IMGNUM3', '|u1'),
- ('IMGFUNC3', '|u1'), ('IMGFID4', '|u1'), ('IMGNUM4', '|u1'),
- ('IMGFUNC4', '|u1'), ('IMGRAW', '>f4', (64,)),
- ('HD3TLM62', '|u1'),
- ('HD3TLM63', '|u1'), ('HD3TLM64', '|u1'), ('HD3TLM65', '|u1'),
- ('HD3TLM66', '|u1'), ('HD3TLM67', '|u1'), ('HD3TLM72', '|u1'),
- ('HD3TLM73', '|u1'), ('HD3TLM74', '|u1'), ('HD3TLM75', '|u1'),
- ('HD3TLM76', '|u1'), ('HD3TLM77', '|u1'),
- ('IMGSIZE', '>i4'), ('FILENAME', '<U128'))
-
-ACA_DTYPE_8x8 = tuple((dt[0], dt[1], (8, 8)) if dt[0] == 'IMGRAW' else dt
- for dt in ACA_DTYPE)
-
-ACA_DTYPE_NAMES = tuple([k[0] for k in ACA_DTYPE])
-
-CONFIG = dict(data_root=os.path.join(MICA_ARCHIVE, 'aca0'),
- temp_root=os.path.join(MICA_ARCHIVE, 'temp'),
- days_at_once=30.0,
- sql_def='archfiles_aca_l0_def.sql',
- cda_table='cda_aca0.h5')
-
-
-def get_options():
- parser = argparse.ArgumentParser(
- description="Fetch aca level 0 products and make a file archive")
- defaults = dict(CONFIG)
- parser.set_defaults(**defaults)
- parser.add_argument("--data-root",
- help="parent directory for all data")
- parser.add_argument("--temp-root",
- help="parent temp directory")
- parser.add_argument("--start",
- help="start date for retrieve "
- + "(defaults to max date of archived files)")
- parser.add_argument("--stop",
- help="stop date for retrieve "
- + "(defaults to now)")
- parser.add_argument("--days-at-once",
- type=float,
- help="if over this number, "
- + "bin to chunks of this number of days")
- parser.add_argument("--cda-table",
- help="file name for h5 list from Chandra Archive")
- opt = parser.parse_args()
- return opt
-
-
-
-[docs]
-def get_slot_data(start, stop, slot, imgsize=None,
- db=None, data_root=None, columns=None,
- centered_8x8=False
- ):
- """
- For a the given parameters, retrieve telemetry and construct a
- masked array of the MSIDs available in that telemetry.
-
- >>> from mica.archive import aca_l0
- >>> slot_data = aca_l0.get_slot_data('2012:001', '2012:002', slot=7)
- >>> temp_ccd_8x8 = aca_l0.get_slot_data('2005:001', '2005:010',
- ... slot=6, imgsize=[8],
- ... columns=['TIME', 'TEMPCCD'])
-
- :param start: start time of requested interval
- :param stop: stop time of requested interval
- :param slot: slot number integer (in the range 0 -> 7)
- :param imgsize: list of integers of desired image sizes
- (defaults to all -> [4, 6, 8])
- :param db: handle to archive lookup table
- :param data_root: parent directory that contains archfiles.db3
- (for use when db handle not available)
- :param columns: list of desired columns in the ACA0 telemetry
- (defaults to all in 8x8 telemetry)
- :param centered_8x8: boolean flag to reshape the IMGRAW field to (-1, 8, 8)
- (defaults to False)
- :returns: data structure for slot
- :rtype: numpy masked recarray
- """
- if data_root is None:
- data_root = CONFIG['data_root']
- if columns is None:
- columns = ACA_DTYPE_NAMES
- if imgsize is None:
- imgsize = [4, 6, 8]
- if db is None:
- dbfile = os.path.join(data_root, 'archfiles.db3')
- db = dict(dbi='sqlite', server=dbfile)
-
- data_files = _get_file_records(start, stop, slots=[slot],
- imgsize=imgsize, db=db,
- data_root=data_root)
- aca_dtype = ACA_DTYPE_8x8 if centered_8x8 else ACA_DTYPE
- dtype = [k for k in aca_dtype if k[0] in columns]
-
- if not len(data_files):
- # return an empty masked array
- return ma.zeros(0, dtype=dtype)
- rows = np.sum(data_files['rows'])
- zero_row = ma.zeros(1, dtype=dtype)
- zero_row.mask = ma.masked
- all_rows = zero_row.repeat(rows)
- rowcount = 0
- for f in data_files:
- fp = os.path.join(data_root,
- str(f['year']),
- "{0:03d}".format(f['doy']),
- f['filename'])
- hdu = pyfits.open(fp)
- chunk = hdu[1].data
- idx0, idx1 = rowcount, (rowcount + len(chunk))
- f_imgsize = int(np.sqrt(chunk[0]['IMGRAW'].size))
- for fname in all_rows.dtype.names:
- if fname == 'IMGRAW' or fname == 'IMGSIZE':
- continue
- if fname in chunk.dtype.names:
- all_rows[fname][idx0: idx1] \
- = chunk[fname]
- if 'IMGSIZE' in columns:
- all_rows['IMGSIZE'][idx0: idx1] = f_imgsize
- if 'FILENAME' in columns:
- all_rows['FILENAME'][idx0: idx1] = f['filename']
- if 'IMGRAW' in columns:
- if centered_8x8:
- if f_imgsize == 8:
- all_rows['IMGRAW'][idx0: idx1] = chunk['IMGRAW']
- else:
- i = (8 - f_imgsize) // 2
- all_rows['IMGRAW'][idx0: idx1, i:-i, i:-i] = \
- chunk['IMGRAW']
- else:
- all_rows['IMGRAW'].reshape(rows, 8, 8)[
- idx0: idx1, 0:f_imgsize, 0:f_imgsize] = (
- chunk['IMGRAW'].reshape(len(chunk),
- f_imgsize, f_imgsize))
- rowcount += len(chunk)
-
- # just include the rows in the requested time range in the returned data
- oktime = ((all_rows['TIME'] >= DateTime(start).secs)
- & (all_rows['TIME'] <= DateTime(stop).secs))
- return all_rows[oktime]
-
-
-
-
-[docs]
-def get_l0_images(start, stop, slot, imgsize=None, columns=None):
- """
- Get ACA L0 images for the given ``start`` and ``stop`` times and
- the given ``slot``. Optionally filter on image size via ``imgsize``
- or change the default image metadata via ``columns``.
-
- >>> from mica.archive import aca_l0
- >>> imgs = aca_l0.get_l0_images('2012:001', '2012:002', slot=7)
- >>> imgs = aca_l0.get_l0_images('2005:001', '2005:002', slot=6, imgsize=[8])
-
- The default columns are:
- ['TIME', 'IMGROW0', 'IMGCOL0', 'BGDAVG', 'IMGSTAT', 'IMGFUNC1', 'IMGSIZE', 'IMGSCALE', 'INTEG']
-
- The image pixel values are given in units of DN. One can convert to e-/sec
- by multiplying by (5 / INTEG).
-
- :param start: start time of requested interval
- :param stop: stop time of requested interval
- :param slot: slot number integer (in the range 0 -> 7)
- :param imgsize: list of integers of desired image sizes (default=[4, 6, 8])
- :param columns: image meta-data columns
-
- :returns: list of ACAImage objects
- """
- if columns is None:
- columns = ['TIME', 'BGDAVG', 'IMGSTAT', 'IMGFUNC1', 'IMGSIZE', 'IMGSCALE', 'INTEG']
- if 'IMGROW0' not in columns:
- columns.append('IMGROW0')
- if 'IMGCOL0' not in columns:
- columns.append('IMGCOL0')
-
- slot_columns = list(set(columns + ['QUALITY', 'IMGRAW', 'IMGSIZE']))
- dat = get_slot_data(start, stop, slot, imgsize=imgsize, columns=slot_columns)
-
- ok = dat['QUALITY'] == 0
- if not(np.all(ok)):
- dat = dat[ok]
-
- # Convert temperatures from K to degC
- for temp in ('TEMPCCD', 'TEMPHOUS', 'TEMPPRIM', 'TEMPSEC'):
- if temp in dat.dtype.names:
- dat[temp] -= 273.15
-
- # Masked array col access ~100 times slower than ndarray so convert
- dat = dat.filled(-9999)
-
- imgs = []
- imgsizes = dat['IMGSIZE']
- imgraws = dat['IMGRAW']
- cols = {name: dat[name] for name in columns}
-
- for i, row in enumerate(dat):
- imgraw = imgraws[i].reshape(8, 8)
- sz = imgsizes[i]
- if sz < 8:
- imgraw = imgraw[:sz, :sz]
-
- meta = {name: col[i] for name, col in cols.items() if col[i] != -9999}
- imgs.append(ACAImage(imgraw, meta=meta))
-
- return imgs
-
-
-
-class MSID(object):
- def __init__(self, msid, slot, start, stop):
- self.tstart = DateTime(start).secs
- self.tstop = DateTime(stop).secs
- self.datestart = DateTime(self.tstart).date
- self.datestop = DateTime(self.tstop).date
- self.slot = slot
- self._check_msid(msid)
- self._get_data()
-
- def _check_msid(self, req_msid):
- if req_msid.upper() in ACA_DTYPE_NAMES:
- self.msid = req_msid.lower()
- else:
- raise MissingDataError("msid %s not found" % req_msid)
-
- def _get_data(self):
- slot_data = get_slot_data(
- self.tstart, self.tstop, self.slot, imgsize=[8],
- columns=['TIME', self.msid.upper()])
- self.vals = slot_data[self.msid.upper()]
- self.times = slot_data['TIME']
-
-
-class MSIDset(collections.OrderedDict):
- def __init__(self, msids, start, stop):
- super(MSIDset, self).__init__()
- self.tstart = DateTime(start).secs
- self.tstop = DateTime(stop).secs
- self.datestart = DateTime(self.tstart).date
- self.datestop = DateTime(self.tstop).date
- for msid in msids:
- self[msid] = MSID(msid, self.tstart, self.tstop)
-
-
-def obsid_times(obsid):
- from kadi import events # Kadi is a big import so defer
- dwells = events.dwells.filter(obsid=obsid)
- n_dwells = len(dwells)
- tstart = dwells[0].tstart
- tstop = dwells[n_dwells - 1].tstop
-
- return tstart, tstop
-
-
-def get_files(obsid=None, start=None, stop=None,
- slots=None, imgsize=None, db=None, data_root=None):
- """
- Retrieve list of files from ACA0 archive lookup table that
- match arguments. The database query returns files with
-
- tstart < stop
- and
- tstop > start
-
- which returns all files that contain any part of the interval
- between start and stop. If the obsid argument is provided, the
- archived obspar tstart/tstop (sybase aca.obspar table) are used.
-
- >>> from mica.archive import aca_l0
- >>> obsid_files = aca_l0.get_files(obsid=5438)
- >>> time_files = aca_l0.get_files(start='2012:001', stop='2012:002')
- >>> time_8x8 = aca_l0.get_files(start='2011:001', stop='2011:010',
- ... imgsize=[8])
-
- :param obsid: obsid
- :param start: start time of requested interval
- :param stop: stop time of requested interval
-    :param slots: list of integers of desired image slots to retrieve
-                  (defaults to all -> [0, 1, 2, 3, 4, 5, 6, 7])
- :param imgsize: list of integers of desired image sizes
- (defaults to all -> [4, 6, 8])
- :param db: handle to archive lookup table
- :param data_root: parent directory of Ska aca l0 archive
-
- :returns: interval files
- :rtype: list
- """
- if slots is None:
- slots = [0, 1, 2, 3, 4, 5, 6, 7]
- if data_root is None:
- data_root = CONFIG['data_root']
- if imgsize is None:
- imgsize = [4, 6, 8]
- if db is None:
- dbfile = os.path.join(data_root, 'archfiles.db3')
- db = dict(dbi='sqlite', server=dbfile)
- if obsid is None:
- if start is None or stop is None:
- raise TypeError("Must supply either obsid or start and stop")
- else:
- start, stop = obsid_times(obsid)
-
- file_records = _get_file_records(start, stop,
- slots=slots, imgsize=imgsize, db=db,
- data_root=data_root)
- files = [os.path.join(data_root,
- "%04d" % f['year'],
- "%03d" % f['doy'],
- str(f['filename']))
- for f in file_records]
- return files
-
-
-
-def _get_file_records(start, stop=None, slots=None,
- imgsize=None, db=None, data_root=None):
- """
- Retrieve list of files from ACA0 archive lookup table that
- match arguments. The database query returns files with
-
- tstart < stop
- and
- tstop > start
-
- which returns all files that contain any part of the interval
- between start and stop.
-
- :param start: start time of requested interval
- :param stop: stop time of requested interval
- (default of None will get DateTime(None) which is
- equivalent to 'now')
-    :param slots: list of integers of desired image slots to retrieve
-                  (defaults to all -> [0, 1, 2, 3, 4, 5, 6, 7])
- :param imgsize: list of integers of desired image sizes
- (defaults to all -> [4, 6, 8])
- :param db: handle to archive lookup table
- :param data_root: parent directory of Ska aca l0 archive
-
- :returns: interval files
- :rtype: list
- """
- if slots is None:
- slots = [0, 1, 2, 3, 4, 5, 6, 7]
- if data_root is None:
- data_root = CONFIG['data_root']
- if imgsize is None:
- imgsize = [4, 6, 8]
- if db is None:
- dbfile = os.path.join(data_root, 'archfiles.db3')
- db = dict(dbi='sqlite', server=dbfile)
-
- tstart = DateTime(start).secs
- tstop = DateTime(stop).secs
- imgsize_str = ','.join([str(x) for x in imgsize])
- slot_str = ','.join([str(x) for x in slots])
- # a composite index isn't as fast as just doing a padded search on one
- # index first (tstart). This gets extra files to make sure we don't
- # miss the case where tstart is in the middle of an interval, but
- # drastically reduces the size of the bsearch on the tstop index
- tstart_pad = 10 * 86400
- db_query = ('SELECT * FROM archfiles '
- 'WHERE tstart >= %f - %f '
- 'AND tstart < %f '
- 'AND tstop > %f '
- 'AND slot in (%s) '
- 'AND imgsize in (%s) '
- 'order by filetime asc '
- % (tstart, tstart_pad, tstop, tstart, slot_str, imgsize_str))
- with ska_dbi.DBI(**db) as db:
- files = db.fetchall(db_query)
- return files
-
-
-class Updater(object):
- def __init__(self,
- db=None,
- data_root=None,
- temp_root=None,
- days_at_once=None,
- sql_def=None,
- cda_table=None,
- filetype=None,
- start=None,
- stop=None,
- ):
- for init_opt in ['data_root', 'temp_root', 'days_at_once',
- 'sql_def', 'cda_table']:
- setattr(self, init_opt, vars()[init_opt] or CONFIG[init_opt])
- self.data_root = os.path.abspath(self.data_root)
- self.temp_root = os.path.abspath(self.temp_root)
- self.filetype = filetype or FILETYPE
-        if db is None:
-            dbfile = os.path.join(self.data_root, 'archfiles.db3')
-            db = dict(dbi='sqlite', server=dbfile, autocommit=False)
- self.db = db
- self.start = start
- self.stop = stop
-
-#def _rebuild_database(db=None, db_file=None,
-# data_root=config['data_root'],
-# sql_def=config['sql_def']):
-# """
-# Utility routine to rebuild the file lookup database using the
-# package defaults and the files in the archive.
-# """
-# if db is None and db_file is None:
-# raise ValueError
-# if db is None and db_file is not None:
-# logger.info("creating archfiles db from %s"
-# % sql_def)
-# db_sql = os.path.join(os.environ['SKA_DATA'],
-# 'mica', sql_def)
-# db_init_cmds = file(db_sql).read()
-# db = ska_dbi.DBI(dbi='sqlite', server=db_file,
-# autocommit=False)
-# db.execute(db_init_cmds, commit=True)
-# year_dirs = sorted(glob(
-# os.path.join(data_root, '[12][0-9][0-9][0-9]')))
-# for ydir in year_dirs:
-# day_dirs = sorted(glob(
-# os.path.join(ydir, '[0-3][0-9][0-9]')))
-# for ddir in day_dirs:
-# archfiles = sorted(glob(
-# os.path.join(ddir, '*_img0*')))
-# db.execute("begin transaction")
-# for i, f in enumerate(archfiles):
-# arch_info = read_archfile(i, f, archfiles, db)
-# if arch_info:
-# db.insert(arch_info, 'archfiles')
-# db.commit()
-
- def _get_missing_archive_files(self, start, only_new=False):
- ingested_files = self._get_arc_ingested_files()
- startdate = DateTime(start).date
- logger.info("Checking for missing files from %s" %
- startdate)
- # find the index in the cda archive list that matches
- # the first entry with the "start" date
- for idate, backcnt in zip(ingested_files['ingest_date'][::-1],
- count(1)):
- if idate < startdate:
- break
-
- last_ok_date = None
- missing = []
- # for the entries after the start date, see if we have the
- # file or a later version
- with ska_dbi.DBI(**self.db) as db:
- for file, idx in zip(ingested_files[-backcnt:], count(0)):
- filename = file['filename']
- db_match = db.fetchall(
- "select * from archfiles where "
- + "filename = '%s' or filename = '%s.gz'"
- % (filename, filename))
- if len(db_match):
- continue
- file_re = re.search(
- r'acaf(\d+)N(\d{3})_(\d)_img0.fits(\.gz)?',
- filename)
- if not file_re:
- continue
- slot = int(file_re.group(3))
- filetime = int(file_re.group(1))
- version = int(file_re.group(2))
-                # if there is an old file and we're just looking for new ones, skip it
- range_match = db.fetchall(
- """SELECT * from archfiles
- WHERE filetime = %(filetime)d
- and slot = %(slot)d""" % dict(filetime=filetime, slot=slot))
- if range_match and only_new:
- continue
- # if there is a newer file there already
- version_match = db.fetchall(
- """SELECT * from archfiles
- WHERE filetime = %(filetime)d
- and slot = %(slot)d
- and revision >= %(version)d"""
- % dict(slot=slot, version=version, filetime=filetime))
- if version_match:
- continue
- # and if made it this far add the file to the list
- missing.append(file)
- # and update the date through which data is complete to
- # the time of the previous file ingest
- if last_ok_date is None:
- last_ok_date = \
- ingested_files['ingest_date'][-backcnt:][idx - 1]
-
- if last_ok_date is None:
- last_ok_date = ingested_files['ingest_date'][-1]
- return missing, last_ok_date
-
- def _get_arc_ingested_files(self):
- table_file = os.path.join(self.data_root, self.cda_table)
- with tables.open_file(table_file) as h5f:
- tbl = h5f.get_node('/', 'data')
- arc_files = tbl[:]
- return Table(arc_files)
-
- def _get_archive_files(self, start, stop):
- """
- Update FITS file archive with arc5gl and ingest files into file archive
-
- :param filetype: a dictionary (or dictionary-like) object with keys for
- 'level', 'instrum', 'content', 'arc5gl_query'
- and 'fileglob' for arc5gl. For ACA0:
- {'level': 'L0', 'instrum': 'PCAD',
- 'content': 'ACADATA', 'arc5gl_query': 'ACA0',
- 'fileglob': 'aca*fits*'}
- :param start: start of interval to retrieve (Chandra.Time compatible)
- :param stop: end of interval to retrieve (Chandra.Time compatible)
-
- :returns: retrieved file names
- :rtype: list
- """
-
- filetype = self.filetype
- # Retrieve CXC archive files in a temp directory with arc5gl
- arc5 = Ska.arc5gl.Arc5gl()
- arc5.sendline('tstart=%s' % DateTime(start).date)
- arc5.sendline('tstop=%s' % DateTime(stop).date)
- arc5.sendline('get %s' % filetype['arc5gl_query'].lower())
- return sorted(glob(filetype['fileglob']))
-
- def _read_archfile(self, i, f, archfiles):
- """
- Read FITS filename ``f`` with index ``i`` (position within list of
- filenames) and get dictionary of values to store in file lookup
- database. These values include all header items in
- ``ARCHFILES_HDR_COLS`` plus the header checksum, the image slot
- as determined by the filename, the imagesize as determined
- by IMGRAW, the year, day-of-year, and number of data rows in the file.
-
- :param i: index of file f within list of files archfiles
- :param f: filename
- :param archfiles: list of filenames for this batch
- :param db: database handle for file lookup database (ska_dbi handle)
-
- :returns: info for a file.
- :rtype: dictionary
- """
-
- # Check if filename is already in file lookup table
- # If so then delete temporary file and abort further processing.
- filename = os.path.basename(f)
- with ska_dbi.DBI(**self.db) as db:
- if db.fetchall('SELECT filename FROM archfiles WHERE filename=?',
- (filename,)):
- logger.debug(
- 'File %s already in archfiles - unlinking and skipping' % f)
- os.unlink(f)
- return None
-
- logger.debug('Reading (%d / %d) %s' % (i, len(archfiles), filename))
- hdus = pyfits.open(f)
- hdu = hdus[1]
-
-        # Accumulate relevant info about archfile that will be ingested
- # (this is borrowed from eng-archive but is set to archive to a
- # database table instead of an h5 table in this version)
- archfiles_row = dict((x, hdu.header.get(x.upper()))
- for x in ARCHFILES_HDR_COLS)
- archfiles_row['checksum'] = hdu.header.get('checksum') or hdu._checksum
- imgsize = hdu.data[0]['IMGRAW'].shape[0]
- archfiles_row['imgsize'] = int(imgsize)
- archfiles_row['slot'] = int(re.search(
- r'acaf\d+N\d{3}_(\d)_img0.fits(\.gz)?',
- filename).group(1))
- archfiles_row['filename'] = filename
- archfiles_row['filetime'] = int(
- re.search(r'(\d+)', archfiles_row['filename']).group(1))
- filedate = DateTime(archfiles_row['filetime']).date
- year, doy = (int(x) for x in
- re.search(r'(\d\d\d\d):(\d\d\d)', filedate).groups())
- archfiles_row['year'] = year
- archfiles_row['doy'] = doy
- archfiles_row['rows'] = len(hdu.data)
- hdus.close()
-
- with ska_dbi.DBI(**self.db) as db:
- # remove old versions of this file
- oldmatches = db.fetchall(
- """SELECT * from archfiles
- WHERE filetime = %(filetime)d
- and slot = %(slot)d
- and startmjf = %(startmjf)d and startmnf = %(startmnf)d
- and stopmjf = %(stopmjf)d and stopmnf = %(stopmnf)d """
- % archfiles_row)
- if len(oldmatches):
- self._arch_remove(oldmatches)
-
- interval_matches = _get_file_records(archfiles_row['tstart'],
- archfiles_row['tstop'],
- slots=[archfiles_row['slot']])
- if len(interval_matches):
- # if there are files there that still overlap the new file
- # and they are all older, remove the old files.
- if np.all(interval_matches['revision']
- < archfiles_row['revision']):
- logger.info(
- "removing overlapping files at older revision(s)")
- logger.info(interval_matches)
- self._arch_remove(interval_matches)
- # if the overlapping files are all from the same revision
- # just ingest them and hope for the best
- elif np.all(interval_matches['revision']
- == archfiles_row['revision']):
- logger.info("ignoring overlap for same process revision")
- # if the files that overlap are all newer, let's not ingest
- # the "missing" file
- elif np.all(interval_matches['revision']
- > archfiles_row['revision']):
- return None
- else:
- logger.error(archfiles_row)
- logger.error(interval_matches)
- # throw an error if there is still overlap
- raise ValueError("Cannot ingest %s, overlaps existing files"
- % filename)
-
- return archfiles_row
-
- def _arch_remove(self, defunct_matches):
- with ska_dbi.DBI(**self.db) as db:
- for file_record in defunct_matches:
- query = ("""delete from archfiles
- WHERE filetime = %(filetime)d
- and slot = %(slot)d
- and startmjf = %(startmjf)d
- and startmnf = %(startmnf)d
- and stopmjf = %(stopmjf)d
- and stopmnf = %(stopmnf)d """
- % file_record)
- logger.info(query)
- db.execute(query)
- db.commit()
- archdir = os.path.abspath(os.path.join(
- self.data_root,
- str(file_record['year']),
- "{0:03d}".format(file_record['doy'])
- ))
- logger.info("deleting %s" %
- os.path.join(archdir, file_record['filename']))
- real_file = os.path.join(archdir, file_record['filename'])
- if os.path.exists(real_file):
- os.unlink(real_file)
-
- def _move_archive_files(self, archfiles):
- """
- Move ACA L0 files into the file archive into directories
- by YYYY/DOY under the specified data_root
- """
-
- data_root = self.data_root
- if not os.path.exists(data_root):
- os.makedirs(data_root)
- for f in archfiles:
- if not os.path.exists(f):
- continue
- basename = os.path.basename(f)
- # use the timestamp right from the ACA0 filename
- tstart = re.search(r'(\d+)', str(basename)).group(1)
- datestart = DateTime(tstart).date
- year, doy = re.search(r'(\d\d\d\d):(\d\d\d)', datestart).groups()
- archdir = os.path.abspath(os.path.join(data_root,
- year,
- doy))
- # construct the destination filepath/name
- archfile = os.path.abspath(os.path.join(archdir, basename))
- if not os.path.exists(archdir):
- os.makedirs(archdir)
- if not os.path.exists(archfile):
- logger.debug('mv %s %s' % (os.path.abspath(f), archfile))
- os.chmod(f, 0o775)
- shutil.move(f, archfile)
- if os.path.exists(f):
- logger.info('Unlinking %s' % os.path.abspath(f))
- os.unlink(f)
-
- def _fetch_by_time(self, range_tstart, range_tstop):
- logger.info("Fetching %s from %s to %s"
- % ('ACA L0 Data',
- DateTime(range_tstart).date,
- DateTime(range_tstop).date))
- archfiles = self._get_archive_files(DateTime(range_tstart),
- DateTime(range_tstop))
- return archfiles
-
- def _fetch_individual_files(self, files):
- arc5 = Ska.arc5gl.Arc5gl(echo=True)
- logger.info('********** %s %s **********'
- % (self.filetype['content'], time.ctime()))
- fetched_files = []
- ingest_dates = []
- # get the files, store in file archive, and record in database
- for file in files:
- # Retrieve CXC archive files in a temp directory with arc5gl
- missed_file = file['filename']
- arc5.sendline('dataset=flight')
- arc5.sendline('detector=pcad')
- arc5.sendline('subdetector=aca')
- arc5.sendline('level=0')
- arc5.sendline('filename=%s' % missed_file)
- arc5.sendline('version=last')
- arc5.sendline('operation=retrieve')
- arc5.sendline('go')
- have_files = sorted(glob(f"{missed_file}*"))
- filename = have_files[0]
-            # if it isn't gzipped, just gzip it
-            if re.match(r'.*\.fits$', filename):
-                with open(filename, 'rb') as f_in:
-                    with gzip.open("%s.gz" % filename, 'wb') as f_out:
-                        f_out.writelines(f_in)
-                filename = "%s.gz" % filename
- fetched_files.append(filename)
- ingest_dates.append(file['ingest_date'])
- return fetched_files, ingest_dates
-
- def _insert_files(self, files):
- count_inserted = 0
- for i, f in enumerate(files):
- arch_info = self._read_archfile(i, f, files)
- if arch_info:
- self._move_archive_files([f])
- with ska_dbi.DBI(**self.db) as db:
- db.insert(arch_info, 'archfiles')
- db.commit()
- count_inserted += 1
- logger.info("Ingested %d files" % count_inserted)
-
- def update(self):
- """
- Retrieve ACA0 telemetry files from the CXC archive, store in the
- Ska/ACA archive, and update database of files.
- """
- contentdir = self.data_root
- if not os.path.exists(contentdir):
- os.makedirs(contentdir)
- if not os.path.exists(self.temp_root):
- os.makedirs(self.temp_root)
- archdb = os.path.join(contentdir, 'archfiles.db3')
- # if the database of the archived files does not exist,
- # or is empty, make it
- if not os.path.exists(archdb) or os.stat(archdb).st_size == 0:
- logger.info("creating archfiles db from %s"
- % self.sql_def)
- db_sql = Path(__file__).parent / self.sql_def
- db_init_cmds = open(db_sql).read()
- with ska_dbi.DBI(**self.db) as db:
- db.execute(db_init_cmds, commit=True)
- if self.start:
- datestart = DateTime(self.start)
- else:
- # Get datestart as the most-recent file time from archfiles table
- # will need min-of-max-slot-datestart
- with ska_dbi.DBI(**self.db) as db:
- last_time = min([db.fetchone(
- "select max(filetime) from archfiles where slot = %d"
- % s)['max(filetime)'] for s in range(0, 8)])
- if last_time is None:
- raise ValueError(
- "No files in archive to do update-since-last-run mode.\n"
- + "Please specify a time with --start")
- datestart = DateTime(last_time)
- datestop = DateTime(self.stop)
- padding_seconds = 10000
- # loop over the specified time range in chunks of
- # days_at_once in seconds with some padding
- for tstart in np.arange(datestart.day_start().secs,
- datestop.day_end().secs,
- self.days_at_once * 86400):
- # set times for a chunk
- range_tstart = tstart - padding_seconds
- range_tstop = tstart + self.days_at_once * 86400
- if range_tstop > datestop.day_end().secs:
- range_tstop = datestop.day_end().secs
- range_tstop += padding_seconds
- # make a temporary directory
- tmpdir = Ska.File.TempDir(dir=self.temp_root)
- dirname = tmpdir.name
-            logger.debug("Saving files to temp dir %s" % dirname)
- # get the files, store in file archive, and record in database
- with Ska.File.chdir(dirname):
- fetched_files = self._fetch_by_time(range_tstart, range_tstop)
- self._insert_files(fetched_files)
-
- timestamp_file = os.path.join(self.data_root, 'last_timestamp.txt')
- # get list of missing files since the last time the tool ingested
- # files. If this is first run of the tool, check from the start of
- # the requested time range
- if (os.path.exists(timestamp_file)
- and os.stat(timestamp_file).st_size > 0):
- cda_checked_timestamp = open(timestamp_file).read().rstrip()
- else:
- cda_checked_timestamp = DateTime(self.start).date
- missing_datetime = DateTime(cda_checked_timestamp)
- missing_files, last_ingest_date = \
- self._get_missing_archive_files(missing_datetime,
- only_new=True)
- # update the file to have up through the last confirmed good file
- # even before we try to fetch missing ones
- open(timestamp_file, 'w').write("%s" % last_ingest_date)
-
- if len(missing_files):
- logger.info("Found %d missing individual files"
- % len(missing_files))
- # make a temporary directory
- tmpdir = Ska.File.TempDir(dir=self.temp_root)
- dirname = tmpdir.name
-        logger.info("Saving files to temp dir %s" % dirname)
- with Ska.File.chdir(dirname):
- fetched_files, ingest_times = \
- self._fetch_individual_files(missing_files)
- self._insert_files(fetched_files)
-
- last_ingest_date = missing_files[-1]['ingest_date']
-        # update the timestamp file through the last file actually ingested
- open(timestamp_file, 'w').write("%s" % last_ingest_date)
- else:
- logger.info("No missing files")
-
-
-def main():
- """
- Command line interface to fetch ACA L0 telemetry from the CXC Archive
- and store it in the Ska archive.
- """
- opt = get_options()
- kwargs = vars(opt)
- updater = Updater(**kwargs)
- updater.update()
-
-if __name__ == '__main__':
- main()
-
Source code for mica.archive.asp_l1
-#!/usr/bin/env python
-# Licensed under a 3-clause BSD style license - see LICENSE.rst
-"""
-Script to update Ska file archive aspect L1 products. Module
-also provides methods to retrieve the directory (or directories)
-for an obsid.
-
-This uses the obsid_archive module with a configuration specific
-to the aspect L1 products.
-
-"""
-import os
-import logging
-import numpy as np
-from astropy.table import Table
-from Quaternion import Quat
-
-from mica.archive import obsid_archive
-from mica.archive import asp_l1_proc
-from mica.common import MICA_ARCHIVE
-
-# these columns are available in the headers of the fetched telemetry
-# for this product (ASP L1) and will be included in the file lookup table
-ARCHFILES_HDR_COLS = ('tstart', 'tstop', 'caldbver', 'content',
- 'ascdsver', 'revision', 'date')
-
-#config = ConfigObj('asp1.conf')
-CONFIG = dict(data_root=os.path.join(MICA_ARCHIVE, 'asp1'),
- temp_root=os.path.join(MICA_ARCHIVE, 'temp'),
- bad_obsids=os.path.join(MICA_ARCHIVE, 'asp1', 'asp_l1_bad_obsids.dat'),
- sql_def='archfiles_asp_l1_def.sql',
- apstat_table='aspect_1',
- apstat_id='aspect_1_id',
- label='asp_l1',
- small='asp1{fidprops}',
- small_glob='*fidpr*',
- small_ver_regex=r'pcadf\d+N(\d{3})_',
- delete=['kalm1', 'gdat', 'adat'],
- full='asp1',
- filecheck=False,
- cols=ARCHFILES_HDR_COLS,
- content_types=['ASPQUAL', 'ASPSOL', 'ACADATA', 'GSPROPS',
- 'GYRODATA', 'KALMAN', 'ACACAL', 'ACACENT',
- 'FIDPROPS', 'GYROCAL', 'ACA_BADPIX'])
-
-
-def get_options():
- import argparse
- desc = \
-"""
-Run the update process to get new ASP L1 telemetry, save it in the Ska
-file archive, and include it in the file lookup database. This is intended
-to be run as a cron task, and in regular processing, the update will fetch
-and ingest all telemetry since the task's last run. Options are also provided
-to fetch and ingest specific obsids and versions.
-
-See the ``CONFIG`` in the asp_l1.py file and the config description in
-obsid_archive for more information on the asp l1 default config if parameters
-without command-line options need to be changed.
-"""
- parser = argparse.ArgumentParser(description=desc)
- defaults = dict(CONFIG)
- parser.set_defaults(**defaults)
- parser.add_argument("--obsid",
- type=int,
- help="specific obsid to process")
- parser.add_argument("--version",
- default='last',
- help="specific processing version to retrieve")
- parser.add_argument("--firstrun",
- action='store_true',
- help="for archive init., ignore rev in aspect_1 table")
- parser.add_argument("--data-root",
- help="parent directory for all data")
- parser.add_argument("--temp-root",
- help="parent temp directory")
- parser.add_argument("--filecheck",
- action="store_true",
- help="for provisional data, download files and check"
- + " that all are present. If unset, proceed if dir"
- + " exists")
- parser.add_argument("--rebuild",
- action="store_true",
- help="Allow update to rebuild archive from obsid 1")
- opt = parser.parse_args()
- return opt
-
-# set up an archive object with default config for use by the other
-# get_* methods
-archive = obsid_archive.ObsArchive(CONFIG)
-
-
-def get_dir(obsid):
- """
- Get ASP L1 directory for default/released products for an obsid.
-
- >>> from mica.archive import asp_l1
- >>> asp_l1.get_dir(2121)
- '/proj/sot/ska/data/mica/archive/asp1/02/02121'
-
- :param obsid: obsid
- :returns: directory
- :rtype: string
- """
- return archive.get_dir(obsid)
-
-
-def get_obs_dirs(obsid):
- """
- Get all ASP L1 directories for an obsid in the Ska file archive.
-
- >>> from mica.archive import asp_l1
- >>> obsdirs = asp_l1.get_obs_dirs(6000)
-
- obsdirs will look something like::
-
- {'default': '/proj/sot/ska/data/mica/archive/asp1/06/06000',
- 2: '/proj/sot/ska/data/mica/archive/asp1/06/06000_v02',
- 3: '/proj/sot/ska/data/mica/archive/asp1/06/06000_v03',
- 'last': '/proj/sot/ska/data/mica/archive/asp1/06/06000',
- 'revisions': [2, 3]}
-
- :param obsid: obsid
- :returns: map of obsid version to directories
- :rtype: dictionary
- """
- return archive.get_obs_dirs(obsid)
-
-
-def get_files(obsid=None, start=None, stop=None,
- revision=None, content=None):
- """
- List asp_l1 files for an obsid or a time range.
-
- >>> from mica.archive import asp_l1
- >>> obs_files = asp_l1.get_files(6000)
- >>> obs_gspr = asp_l1.get_files(6000, content=['GSPROPS'])
- >>> range_fidpr = asp_l1.get_files(start='2012:001',
- ... stop='2012:030',
- ... content=['FIDPROPS'])
-
-
- The available content types are: ASPQUAL, ASPSOL, ASPSOLOBI, ACACAL,
- ACA_BADPIX, FIDPROPS, GYROCAL, GSPROPS, and ACACENT.
-
- :param obsid: obsid
- :param start: time range start (Chandra.Time compatible)
- :param stop: time range stop (Chandra.Time compatible)
- :param revision: revision integer or 'last'
- defaults to current released version
- :param content: archive CONTENT type
- defaults to all available ASP1 types
- :returns: full path of files matching query
- """
- return archive.get_files(obsid=obsid, start=start, stop=stop,
- revision=revision, content=content)
-
-
-
-def get_atts(obsid=None, start=None, stop=None, revision=None, filter=True):
- """
- Get the ground aspect solution quaternions and times covering obsid or start to stop,
- in the ACA frame.
-
-    :param obsid: obsid
-    :param start: start time (DateTime compatible)
-    :param stop: stop time (DateTime compatible)
-    :param revision: aspect pipeline processing revision (integer version, None, or 'last')
-    :param filter: if True, exclude quaternions during times when asp_sol_status is non-zero
-
- :returns: Nx4 np.array of quaternions, np.array of N times, list of dict with header from each asol file.
- """
- if revision == 'all':
- raise ValueError("revision 'all' doesn't really make sense for this function")
- # These are in time order by default from get_files
- asol_files = get_files(obsid=obsid, start=start, stop=stop,
- revision=revision, content=['ASPSOL'])
- acal_files = get_files(obsid=obsid, start=start, stop=stop,
- revision=revision, content=['ACACAL'])
- aqual_files = get_files(obsid=obsid, start=start, stop=stop,
- revision=revision, content=['ASPQUAL'])
- return get_atts_from_files(asol_files, acal_files, aqual_files, filter=filter)
-
-
-def get_atts_from_files(asol_files, acal_files, aqual_files, filter=True):
- """
-    From ASP1 source files (asol, acal, aqual), get the ground aspect solution
-    quaternions and times covering the range of asol_files in the ACA frame.
-    The asol, acal, and aqual files are assumed to have one-to-one correspondence
-    (though the asol to acal times are checked).
-
-    :param asol_files: list of aspect asol1 files
-    :param acal_files: list of acal1 files associated with asol_files
-    :param aqual_files: list of aqual files associated with asol_files
-    :param filter: if True, exclude quaternions during times when asp_sol_status is non-zero
-
- :returns: Nx4 np.array of quaternions, np.array of N times, list of dict with header from each asol file.
- """
- # There should be one asol and one acal file for each aspect interval in the range
- att_chunks = []
- time_chunks = []
- records = []
- for asol_f, acal_f, aqual_f in zip(asol_files, acal_files, aqual_files):
- asol = Table.read(asol_f, hdu=1)
- acal = Table.read(acal_f, hdu=1)
- aqual = Table.read(aqual_f, hdu=1)
- # Check that the time ranges match from the fits headers (meta in the table)
- if not np.allclose(np.array([asol.meta['TSTART'], asol.meta['TSTOP']]),
- np.array([acal.meta['TSTART'], acal.meta['TSTOP']]),
- atol=10):
- raise ValueError("ACAL and ASOL have mismatched time ranges")
- if filter and np.any(aqual['asp_sol_status'] != 0):
- # For each sample with bad status find the overlapping time range in asol
- # and remove from set
- for idx in np.flatnonzero(aqual['asp_sol_status']):
- nok = ((asol['time'] >= (aqual['time'][idx] - 1.025))
- & (asol['time'] <= (aqual['time'][idx] + 1.025)))
- asol = asol[~nok]
-
- # Transpose of transform/rotation matrix is the inverse
- aca_mis_inv = acal['aca_misalign'][0].transpose()
-
- # Repeat the transform to match asol
- aca_mis_inv = np.repeat(aca_mis_inv[np.newaxis, ...], repeats=len(asol), axis=0)
- q_mis_inv = Quat(transform=aca_mis_inv)
-
- # Quaternion multiply the asol quats with that inv misalign and save
- q_att_name = 'q_att_raw' if 'q_att_raw' in asol.colnames else 'q_att'
- att_chunks.append((Quat(q=asol[q_att_name]) * q_mis_inv).q)
- time_chunks.append(np.array(asol['time']))
- records.append(asol.meta)
- if len(att_chunks) > 0 and len(time_chunks) > 0:
- return np.vstack(att_chunks), np.hstack(time_chunks), records
- else:
- return np.array([]), np.array([]), []
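A minimal hedged sketch of fetching ground attitudes with get_atts (the obsid follows the docstring examples above; a local mica archive is assumed):

from Quaternion import Quat
from mica.archive import asp_l1

atts, times, asol_headers = asp_l1.get_atts(obsid=6000)
# atts is an Nx4 array of attitude quaternions in the ACA frame
q0 = Quat(q=atts[0])
print(q0.ra, q0.dec, q0.roll)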
-
-
-
-def main():
- """
- Run the update process to get new ASP L1 telemetry, save it in the Ska
- file archive, and include it in the file lookup database.
- """
- opt = get_options()
- config = vars(opt)
- archive = obsid_archive.ObsArchive(config)
- archive.logger.setLevel(logging.INFO)
- archive.logger.addHandler(logging.StreamHandler())
- obsids = archive.update()
-
-if __name__ == '__main__':
- main()
-
Source code for mica.archive.obsid_archive
prov_data = self.get_todo_from_links(archive_dir)
for obs in prov_data:
# check again for multi-obis and limit to first one
- with ska_dbi.DBI(**apstat) as db:
+ with Sqsh(**apstat) as db:
obis = db.fetchall(
"select distinct obi from obidet_0_5 where obsid = %d"
% obs['obsid'])
diff --git a/docs/_modules/mica/archive/obspar.html b/docs/_modules/mica/archive/obspar.html
index 8d1384fa..4319aad7 100644
--- a/docs/_modules/mica/archive/obspar.html
+++ b/docs/_modules/mica/archive/obspar.html
@@ -4,11 +4,11 @@
- mica.archive.obspar — mica 4.35.1 documentation
+ mica.archive.obspar — mica 4.35.2 documentation
-
+
@@ -44,7 +44,7 @@
Source code for mica.vv.process
-# Licensed under a 3-clause BSD style license - see LICENSE.rst
-from __future__ import division
-
-import os
-import tempfile
-import json
-import tables
-import shutil
-import logging
-from glob import glob
-import numpy as np
-
-import ska_dbi
-
-import mica.archive.asp_l1 as asp_l1_arch
-import mica.archive.obspar as obspar_arch
-from mica.archive import obsid_archive
-from .core import Obi, VV_VERSION
-from .vv import FILES
-
-
-VV_DTYPE = np.dtype(
- [('obsid', '<i4'),
- ('revision', '<i4'),
- ('isdefault', '<i4'),
- ('aspect_1_id', '<i4'),
- ('used', '<i4'), ('vv_version', '<i4'),
- ('ap_date', '|S21'),
- ('tstart', '<f8'), ('tstop', '<f8'),
- ('sim_z', '<f8'), ('sim_z_offset', '<f8'), ('instrument', '|S10'),
- ('ra_pnt', '<f8'), ('dec_pnt', '<f8'), ('roll_pnt', '<f8'),
- ('slot', '<i4'), ('type', '|S10'),
- ('n_pts', '<i4'), ('rad_off', '<f8'),
- ('frac_dy_big', '<f8'), ('frac_dz_big', '<f8'), ('frac_mag_big', '<f8'),
- ('mean_y', '<f8'), ('mean_z', '<f8'),
- ('dy_mean', '<f8'), ('dy_med', '<f8'), ('dy_rms', '<f8'),
- ('dz_mean', '<f8'), ('dz_med', '<f8'), ('dz_rms', '<f8'),
- ('dr_mean', '<f8'), ('dr_med', '<f8'), ('dr_rms', '<f8'),
- ('mag_mean', '<f8'), ('mag_med', '<f8'), ('mag_rms', '<f8'),
- ('mean_aacccdpt', '<f8')])
-
-
-KNOWN_BAD_OBSIDS = []
-if os.path.exists(FILES['bad_obsid_list']):
- KNOWN_BAD_OBSIDS = json.loads(open(FILES['bad_obsid_list']).read())
-
-logger = logging.getLogger('vv')
-
-
-def _file_vv(obi):
- """
- Save processed V&V data to per-obsid archive
- """
- obsid = int(obi.info()['obsid'])
- version = int(obi.info()['revision'])
- # set up directory for data
- strobs = "%05d_v%02d" % (obsid, version)
- chunk_dir = strobs[0:2]
- chunk_dir_path = os.path.join(FILES['data_root'], chunk_dir)
- obs_dir = os.path.join(chunk_dir_path, strobs)
- if not os.path.exists(obs_dir):
- logger.info("making directory %s" % obs_dir)
- os.makedirs(obs_dir)
- else:
- logger.info("obsid dir %s already exists" % obs_dir)
- for f in glob(os.path.join(obi.tempdir, "*")):
- os.chmod(f, 0o775)
- shutil.copy(f, obs_dir)
- os.remove(f)
- logger.info("moved VV files to {}".format(obs_dir))
- os.removedirs(obi.tempdir)
- logger.info("removed directory {}".format(obi.tempdir))
- # make any desired link
- obs_ln = os.path.join(FILES['data_root'], chunk_dir, "%05d" % obsid)
- obs_ln_last = os.path.join(
- FILES['data_root'], chunk_dir, "%05d_last" % obsid)
- obsdirs = asp_l1_arch.get_obs_dirs(obsid)
- isdefault = 0
- if 'default' in obsdirs:
- if (os.path.realpath(obsdirs[version])
- == os.path.realpath(obsdirs['default'])):
- if os.path.islink(obs_ln):
- os.unlink(obs_ln)
- os.symlink(os.path.relpath(obs_dir, chunk_dir_path), obs_ln)
- isdefault = 1
-    if 'last' in obsdirs:
-        # make or update the "_last" link only when 'last' differs from 'default'
-        if ('default' not in obsdirs
-                or (os.path.realpath(obsdirs['last'])
-                    != os.path.realpath(obsdirs['default']))):
- if (os.path.realpath(obsdirs[version])
- == os.path.realpath(obsdirs['last'])):
- if os.path.islink(obs_ln_last):
- os.unlink(obs_ln_last)
- os.symlink(os.path.relpath(obs_dir, chunk_dir_path),
- obs_ln_last)
- if ('default' in obsdirs
- and (os.path.realpath(obsdirs['last'])
- == os.path.realpath(obsdirs['default']))):
- if os.path.exists(obs_ln_last):
- os.unlink(obs_ln_last)
- obi.isdefault = isdefault
-
-
-def update(obsids=[]):
- """
- For a list of obsids or for all new obsids, run V&V processing
- and save V&V info to archive.
-
- :param obsids: optional list of obsids
- """
-    if len(obsids) == 0:
-        # If no obsids specified, run on all with vv_complete = 0 in
-        # the local processing database.
- with ska_dbi.DBI(dbi='sqlite', server=FILES['asp1_proc_table']) as db:
- obsids = db.fetchall(
- """SELECT * FROM aspect_1_proc
- where vv_complete = 0
- order by aspect_1_id""")['obsid']
- for obsid in obsids:
- with ska_dbi.DBI(dbi='sqlite', server=FILES['asp1_proc_table']) as db:
- proc = db.fetchall(
- f"SELECT obsid, revision, ap_date FROM aspect_1_proc where obsid = {obsid}")
- for obs in proc:
- if obsid in KNOWN_BAD_OBSIDS:
- logger.info(f"Skipping known bad obsid {obsid}")
- continue
- logger.info(f"running VV for obsid {obsid} run on {obs['ap_date']}")
- try:
- process(obsid, version=obs['revision'])
- except LookupError:
-                logger.warning(
-                    f"Skipping obs:ver {obsid}:{obs['revision']}. Missing data")
- continue
- update_str = (f"""UPDATE aspect_1_proc set vv_complete = {VV_VERSION}
- where obsid = {obsid} and revision = {obs['revision']}""")
-
- logger.info(update_str)
- with ska_dbi.DBI(dbi='sqlite', server=FILES['asp1_proc_table']) as db:
- db.execute(update_str)
-
-
-def get_arch_vv(obsid, version='last'):
- """
- Given obsid and version, find archived ASP1 and obspar products and
- run V&V. Effort is made to find the obspar that was actually used during
- creation of the ASP1 products.
-
- :param obsid: obsid
- :param version: 'last', 'default', or revision number of ASP1 products
- :returns: mica.vv.Obi V&V object
- """
- logger.info("Generating V&V for obsid {}".format(obsid))
- asp_l1_dirs = asp_l1_arch.get_obs_dirs(obsid)
- if asp_l1_dirs is None or version not in asp_l1_dirs:
- raise LookupError("Requested version {} not in asp_l1 archive".format(version))
- l1_dir = asp_l1_dirs[version]
- # find the obspar that matches the requested aspect_1 products
- # this is in the aspect processing table
- asp_l1_proc = ska_dbi.DBI(dbi="sqlite", server=FILES['asp1_proc_table'])
- asp_obs = asp_l1_proc.fetchall(
- "SELECT * FROM aspect_1_proc where obsid = {}".format(
- obsid))
- asp_proc = None
- if len(asp_obs) == 0:
- return None
- if version == 'last':
- asp_proc = asp_obs[asp_obs['aspect_1_id']
- == np.max(asp_obs['aspect_1_id'])][0]
- if version == 'default':
- asp_proc = asp_obs[asp_obs['isdefault'] == 1][0]
- if asp_proc is None:
- asp_proc = asp_obs[asp_obs['revision'] == version][0]
- obspar_dirs = obspar_arch.get_obs_dirs(obsid)
- if obspar_dirs is None or asp_proc['obspar_version'] not in obspar_dirs:
- # try to update the obspar archive with the missing version
- config = obspar_arch.CONFIG.copy()
- config.update(dict(obsid=obsid, version=asp_proc['obspar_version']))
- oa = obsid_archive.ObsArchive(config)
- oa.logger.setLevel(logging.INFO)
- oa.logger.addHandler(logging.StreamHandler())
- oa.update()
- obspar_dirs = obspar_arch.get_obs_dirs(obsid)
- try:
- obspar_file = glob(os.path.join(obspar_dirs[asp_proc['obspar_version']],
- 'axaf*par*'))[0]
- except IndexError:
- raise LookupError(f"Requested version {version} not in obspar archive")
- return Obi(obspar_file, l1_dir, temproot=FILES['temp_root'])
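A hedged usage sketch for get_arch_vv (the obsid is illustrative; the import path follows this page's module name, and obi.info() is the same accessor used by _file_vv above):

from mica.vv.process import get_arch_vv

obi = get_arch_vv(19702, version='default')  # illustrative obsid
if obi is not None:
    print(obi.info())  # dict with 'obsid', 'revision', etc.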
-
-
-
-def process(obsid, version='last'):
- """
- For requested obsid/version, run V&V, make plots,
- save plots and JSON, save info to shelve file, and
- update RMS HDF5.
-
- :param obsid: obsid
- :param version: 'last', 'default' or revision number of ASP1 products
- :returns: mica.vv.Obi V&V object
- """
- obi = get_arch_vv(obsid, version)
- if not os.path.exists(FILES['temp_root']):
- os.makedirs(FILES['temp_root'])
- if obi is None:
- return None
- obi.tempdir = tempfile.mkdtemp(dir=FILES['temp_root'])
- obi.save_plots_and_resid()
- _file_vv(obi)
- if not os.path.exists(FILES['h5_file']):
- vv_desc, byteorder = tables.descr_from_dtype(VV_DTYPE)
- filters = tables.Filters(complevel=5, complib='zlib')
- h5f = tables.open_file(FILES['h5_file'], 'a')
- tbl = h5f.create_table('/', 'vv', vv_desc, filters=filters, expectedrows=1e6)
- h5f.close()
- h5 = tables.open_file(FILES['h5_file'], 'a')
- tbl = h5.get_node('/', 'vv')
- obi.set_tbl(tbl)
- obi.slots_to_table()
- tbl.flush()
- h5.flush()
- h5.close()
- return obi
-
Source code for mica.vv.vv
-# Licensed under a 3-clause BSD style license - see LICENSE.rst
-from __future__ import division
-
-import os
-import json
-import tables
-import logging
-from glob import glob
-import numpy as np
-
-import ska_dbi
-
-from mica.common import MICA_ARCHIVE
-
-FILES = dict(
- data_root=os.path.join(MICA_ARCHIVE, 'vv'),
- temp_root=os.path.join(MICA_ARCHIVE, 'tempvv'),
- shelf_file=os.path.join(MICA_ARCHIVE, 'vv', 'vv_shelf'),
- h5_file=os.path.join(MICA_ARCHIVE, 'vv', 'vv.h5'),
- last_file=os.path.join(MICA_ARCHIVE, 'vv', 'last_id.txt'),
- asp1_proc_table=os.path.join(MICA_ARCHIVE, 'asp1', 'processing_asp_l1.db3'),
- bad_obsid_list=os.path.join(MICA_ARCHIVE, 'vv', 'bad_obsids.json'))
-
-
-logger = logging.getLogger('vv')
-logger.setLevel(logging.INFO)
-logger.addHandler(logging.StreamHandler())
-
-
-def get_vv_dir(obsid, version="default"):
- """
- Get directory containing V&V products for a requested obsid/version,
- including plots and json.
-
- :param obsid: obsid
- :param version: 'last', 'default' or version number
- :returns: directory name for obsid/version
- """
- num_version = None
- if version == 'last' or version == 'default':
- asp_l1_proc = ska_dbi.DBI(dbi="sqlite", server=FILES['asp1_proc_table'])
- if version == 'default':
- obs = asp_l1_proc.fetchall("""select * from aspect_1_proc
- where obsid = {} and isdefault = 1
- """.format(obsid))
- if not len(obs):
- raise LookupError("Version {} not found for obsid {}".format(
- version, obsid))
- num_version = obs['revision'][0]
- if version == 'last':
- obs = asp_l1_proc.fetchall("""select * from aspect_1_proc
- where obsid = {}
- """.format(obsid))
- if not len(obs):
- raise LookupError("No entries found for obsid {}".format(
- obsid))
- num_version = np.max(obs['revision'])
- else:
- num_version = version
- strobs = "%05d_v%02d" % (obsid, num_version)
- chunk_dir = strobs[0:2]
- chunk_dir_path = os.path.join(FILES['data_root'], chunk_dir)
- obs_dir = os.path.join(chunk_dir_path, strobs)
- if not os.path.exists(obs_dir):
- raise LookupError("Expected vv archive dir {} not found".format(obs_dir))
- return obs_dir
-
-
-
-def get_vv_files(obsid, version="default"):
- """
- Get list of V&V files available for a requested obsid/version.
-
- :param obsid: obsid
- :param version: 'default', 'last' or version number
- :returns: list of files
- """
- vv_dir = get_vv_dir(obsid, version)
- return glob(os.path.join(vv_dir, "*"))
-
-
-
-def get_vv(obsid, version="default"):
- """
- Retrieve V&V data for an obsid/version.
- This reads the saved JSON and returns the previously-
- calculated V&V data.
-
- :param obsid: obsid
- :param version: 'last', 'default', or version number
- :returns: dict of V&V data
- """
- vv_dir = get_vv_dir(obsid, version)
- json_file = glob(os.path.join(vv_dir, "*.json"))[0]
- return json.loads(open(json_file).read())
-
-
-
-def get_rms_data():
- """
- Retrieve/return all data from RMS trending H5 archive
-
- :returns: numpy array of RMS data for each star/obsid/version
- """
- tables_open_file = getattr(tables, 'open_file', None) or tables.openFile
- with tables_open_file(FILES['h5_file'], 'r') as h5f:
- data = h5f.root.vv[:]
- return data
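A short hedged sketch of filtering the returned RMS trending data (column names follow VV_DTYPE in mica.vv.process; the import path follows this page's module name and the obsid is illustrative):

from mica.vv.vv import get_rms_data

data = get_rms_data()
ok = data['obsid'] == 19702  # illustrative obsid
print(data['slot'][ok], data['dy_rms'][ok], data['dz_rms'][ok])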
-
-
-
Dark current calibrations¶
-The mica.archive.aca_dark.dark_cal module provides functions for retrieving
+The mica.archive.aca_dark.dark_cal module provides functions for retrieving
data for the ACA full-frame dark current calibrations which occur about four
times per year (see the ACA dark calibrations TWiki page).
The functions available are documented in the mica.archive.aca_dark section, but the most useful are:
As an example, let’s plot the raw and corrected warm pixel fraction over the mission. The correction in this case is done to a reference temperature of -15 C:
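A hedged reconstruction of the example this sentence introduces, since the original code block did not survive extraction. The 100 e-/sec warm-pixel threshold, the props keys ('date', 'ccd_temp', 'image'), and the dark_temp_scale call signature are assumptions:

import matplotlib.pyplot as plt
import numpy as np
from Chandra.Time import DateTime
from mica.archive.aca_dark import dark_cal

warm_threshold = 100.0  # e-/sec; assumed threshold for a "warm" pixel
years, frac_raw, frac_corr = [], [], []
for cal_id in dark_cal.get_dark_cal_dirs():
    props = dark_cal.get_dark_cal_props(cal_id, include_image=True)  # keys assumed
    img = props['image']
    scale = dark_cal.dark_temp_scale(props['ccd_temp'], -15.0)  # scale to -15 C
    years.append(DateTime(props['date']).frac_year)
    frac_raw.append(np.count_nonzero(img > warm_threshold) / img.size)
    frac_corr.append(np.count_nonzero(img * scale > warm_threshold) / img.size)

plt.plot(years, frac_raw, '.', label='raw')
plt.plot(years, frac_corr, '.', label='corrected to -15 C')
plt.xlabel('Year')
plt.ylabel('Warm pixel fraction')
plt.legend()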
diff --git a/docs/aca_diagnostic_telemetry.html b/docs/aca_diagnostic_telemetry.html
index 60b0bce8..20d8ea79 100644
--- a/docs/aca_diagnostic_telemetry.html
+++ b/docs/aca_diagnostic_telemetry.html
@@ -5,11 +5,11 @@
- ACA diagnostic telemetry — mica 4.35.1 documentation
+ ACA diagnostic telemetry — mica 4.35.2 documentation
-
+
@@ -58,7 +58,7 @@ Navigation
ACA diagnostic telemetry¶
-The mica.archive.aca_hdr3 module works with Header 3 data
+The mica.archive.aca_hdr3 module works with Header 3 data
(extended ACA diagnostic telemetry) available in 8x8 ACA
L0 image data. The module provides an MSID class and MSIDset class to fetch
these data as “pseudo-MSIDs” and return masked array data structures.
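A brief hedged sketch of the pseudo-MSID fetch just described (the 'ccd_temp' pseudo-MSID name and the MSID(msid, start, stop) call signature are assumptions):

from mica.archive import aca_hdr3

ccd_temp = aca_hdr3.MSID('ccd_temp', '2012:001', '2012:002')
# .vals is a masked array; samples are masked where no 8x8 data exists
print(ccd_temp.times[0], ccd_temp.vals[0])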
diff --git a/docs/aca_l0_telemetry.html b/docs/aca_l0_telemetry.html
index dde97bc2..2cfdfcb4 100644
--- a/docs/aca_l0_telemetry.html
+++ b/docs/aca_l0_telemetry.html
@@ -5,11 +5,11 @@
-
ACA L0 telemetry — mica 4.35.1 documentation
+ ACA L0 telemetry — mica 4.35.2 documentation
-
+
@@ -58,7 +58,7 @@ Navigation
ACA L0 telemetry¶
-The mica.archive.aca_l0 module provides tools to build and fetch from
+The mica.archive.aca_l0 module provides tools to build and fetch from
a file archive of ACA L0 telemetry. This telemetry is stored in
directories by year and day-of-year, and ingested filenames are stored
in a lookup table.
@@ -110,7 +110,7 @@ Get_slot_data()
-get_slot_data(), as it places the values in a
+get_slot_data(), as it places the values in a
masked array (masking, for example, TEMPCCD when in 4x4 mode or when
the data is just not available).
@@ -126,7 +126,7 @@ Get_slot_data()
-The get_slot_data() method will retrieve
+The get_slot_data() method will retrieve
all columns by default and the resulting data structure, as mentioned,
will have masked columns where those values are not available
(i.e. HD3TLM64 in 6x6 or 4x4 image data). See ACA L0 MSIDs/columns
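A minimal sketch of the masked-column behavior described above (the get_slot_data signature is taken from the aca_l0 source earlier in this diff):

from mica.archive import aca_l0

dat = aca_l0.get_slot_data('2012:001', '2012:002', slot=7,
                           columns=['TIME', 'TEMPCCD'])
# TEMPCCD is masked wherever the slot was not producing 8x8 images
print(dat['TEMPCCD'].mask[:5])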
@@ -159,7 +159,7 @@ Get_slot_data()
Get_l0_images()¶
An alternate way to access ACA L0 images is via the
-get_l0_images() function. This returns a Python list of
+get_l0_images() function. This returns a Python list of
ACAImage objects (see the chandra_aca.aca_image docs for details). Each of these
objects contains the image along with relevant meta-data for each readout: ['TIME',
'IMGROW0', 'IMGCOL0', 'BGDAVG', 'IMGSTAT', 'IMGFUNC1', 'IMGSIZE', 'INTEG'].
diff --git a/docs/acq_stats.html b/docs/acq_stats.html
index 0deb6b2f..a6e81382 100644
--- a/docs/acq_stats.html
+++ b/docs/acq_stats.html
@@ -5,11 +5,11 @@
- Acquisition star statistics — mica 4.35.1 documentation
+ Acquisition star statistics — mica 4.35.2 documentation
-
+
@@ -58,7 +58,7 @@ Navigation
- mica.archive.aca_dark — mica 4.35.1 documentation
+ mica.archive.aca_dark — mica 4.35.2 documentation
-
+
@@ -52,7 +52,7 @@ Navigation
mica.archive.aca_dark¶
-mica.archive.aca_dark.dark_cal¶
+mica.archive.aca_dark.dark_cal¶
-mica.archive.aca_hdr3¶
-Experimental/alpha code to work with ACA L0 Header 3 data
+mica.archive.aca_hdr3¶
-mica.archive.aca_l0¶
+mica.archive.aca_l0¶
Functions¶
-mica.archive.asp_l1¶
-Script to update Ska file archive aspect L1 products. Module
-also provides methods to retrieve the directory (or directories)
-for an obsid.
-This uses the obsid_archive module with a configuration specific
-to the aspect L1 products.
+mica.archive.asp_l1¶
Functions¶
@@ -1166,94 +698,10 @@ Functions¶
-mica.vv¶
+mica.vv¶
Functions¶