Columns: code (string, lengths 51 to 2.38k); docstring (string, lengths 4 to 15.2k)
def unset_sentry_context(self, tag):
    if self.sentry_client:
        self.sentry_client.tags.pop(tag, None)
Remove a context tag from sentry :param tag: The context tag to remove :type tag: :class:`str`
def _query_helper(self, by=None):
    if by is None:
        primary_keys = self.table.primary_key.columns.keys()
        if len(primary_keys) > 1:
            warnings.warn("WARNING: MORE THAN 1 PRIMARY KEY FOR TABLE %s. "
                          "USING THE FIRST KEY %s." % (self.table.name, primary_keys[0]))
        if not primary_keys:
            raise NoPrimaryKeyException("Table %s needs a primary key for"
                                        "the .last() method to work properly. "
                                        "Alternatively, specify an ORDER BY "
                                        "column with the by= argument. " % self.table.name)
        id_col = primary_keys[0]
    else:
        id_col = by
    if self.column is None:
        col = "*"
    else:
        col = self.column.name
    return col, id_col
Internal helper for preparing queries.
def add(self, logical_id, deployment_preference_dict):
    if logical_id in self._resource_preferences:
        raise ValueError(
            "logical_id {logical_id} previously added to this deployment_preference_collection".format(
                logical_id=logical_id))
    self._resource_preferences[logical_id] = DeploymentPreference.from_dict(
        logical_id, deployment_preference_dict)
Add this deployment preference to the collection.

:raise ValueError: if the logical id has already been added to _resource_preferences
:param logical_id: logical id of the resource to which this deployment preference applies
:param deployment_preference_dict: the input SAM template deployment preference mapping
def _merge_many_to_one_field_from_fkey(self, main_infos, prop, result):
    if prop.columns[0].foreign_keys and prop.key.endswith('_id'):
        rel_name = prop.key[0:-3]
        for val in result:
            if val["name"] == rel_name:
                val["label"] = main_infos['label']
                main_infos = None
                break
    return main_infos
Find the relationship associated with this fkey and set the title.

:param dict main_infos: The already collected data about this column
:param obj prop: The property mapper of the relationship
:param list result: The actual collected headers
:returns: a main_infos dict or None
def get_ldict_keys(ldict, flatten_keys=False, **kwargs):
    result = []
    for ddict in ldict:
        if isinstance(ddict, dict):
            if flatten_keys:
                ddict = flatten(ddict, **kwargs)
            result.extend(ddict.keys())
    return list(set(result))
Get first level keys from a list of dicts
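For illustration, a minimal usage sketch of get_ldict_keys above with made-up input (the default flatten_keys=False means no flatten helper is required):

keys = get_ldict_keys([{'a': 1, 'b': 2}, {'b': 3, 'c': 4}, 'not a dict'])
# non-dict entries are skipped; order is not guaranteed because a set is used internally
assert sorted(keys) == ['a', 'b', 'c']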
def qnwgamma(n, a=1.0, b=1.0, tol=3e-14):
    return _make_multidim_func(_qnwgamma1, n, a, b, tol)
Computes nodes and weights for gamma distribution

Parameters
----------
n : int or array_like(float)
    A length-d iterable of the number of nodes in each dimension
a : scalar or array_like(float), optional(default=ones(d))
    Shape parameter of the gamma distribution. Must be positive
b : scalar or array_like(float), optional(default=ones(d))
    Scale parameter of the gamma distribution. Must be positive
tol : scalar or array_like(float), optional(default=ones(d) * 3e-14)
    Tolerance parameter for newton iterations for each node

Returns
-------
nodes : np.ndarray(dtype=float)
    Quadrature nodes
weights : np.ndarray(dtype=float)
    Weights for quadrature nodes

Notes
-----
Based on the original function ``qnwgamma`` in CompEcon toolbox by Miranda and Fackler

References
----------
Miranda, Mario J, and Paul L Fackler. Applied Computational Economics and Finance, MIT Press, 2002.
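A brief usage sketch of qnwgamma (assuming the CompEcon-style implementation described above and that numpy is available); the weighted sum of the nodes should reproduce the mean of a Gamma(a, b) distribution, which is a * b:

import numpy as np

nodes, weights = qnwgamma(15, a=2.0, b=1.0)
approx_mean = np.dot(weights, nodes)
# mean of Gamma(shape=2, scale=1) is 2.0; Gaussian quadrature is exact for degree-1 moments
assert abs(approx_mean - 2.0) < 1e-8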
def permission_denied(request, template_name=None, extra_context=None):
    if template_name is None:
        template_name = ('403.html', 'authority/403.html')
    context = {
        'request_path': request.path,
    }
    if extra_context:
        context.update(extra_context)
    return HttpResponseForbidden(loader.render_to_string(
        template_name=template_name,
        context=context,
        request=request,
    ))
Default 403 handler. Templates: `403.html` Context: request_path The path of the requested URL (e.g., '/app/pages/bad_page/')
def reset(self): if self._call_later_handler is not None: self._call_later_handler.cancel() self._call_later_handler = None self._wait_done_cb()
Resetting the duration for throttling.
def object_build_function(node, member, localname): args, varargs, varkw, defaults = inspect.getargspec(member) if varargs is not None: args.append(varargs) if varkw is not None: args.append(varkw) func = build_function( getattr(member, "__name__", None) or localname, args, defaults, member.__doc__ ) node.add_local_node(func, localname)
create astroid for a living function object
def send_frame(self, cmd, headers=None, body=''): frame = utils.Frame(cmd, headers, body) self.transport.transmit(frame)
Encode and send a stomp frame through the underlying transport. :param str cmd: the protocol command :param dict headers: a map of headers to include in the frame :param body: the content of the message
def consolidate_metadata(store, metadata_key='.zmetadata'):
    store = normalize_store_arg(store)

    def is_zarr_key(key):
        return (key.endswith('.zarray') or key.endswith('.zgroup') or
                key.endswith('.zattrs'))

    out = {
        'zarr_consolidated_format': 1,
        'metadata': {
            key: json_loads(store[key])
            for key in store
            if is_zarr_key(key)
        }
    }
    store[metadata_key] = json_dumps(out)
    return open_consolidated(store, metadata_key=metadata_key)
Consolidate all metadata for groups and arrays within the given store into a single resource and put it under the given key. This produces a single object in the backend store, containing all the metadata read from all the zarr-related keys that can be found. After metadata have been consolidated, use :func:`open_consolidated` to open the root group in optimised, read-only mode, using the consolidated metadata to reduce the number of read operations on the backend store. Note, that if the metadata in the store is changed after this consolidation, then the metadata read by :func:`open_consolidated` would be incorrect unless this function is called again. .. note:: This is an experimental feature. Parameters ---------- store : MutableMapping or string Store or path to directory in file system or name of zip file. metadata_key : str Key to put the consolidated metadata under. Returns ------- g : :class:`zarr.hierarchy.Group` Group instance, opened with the new consolidated metadata. See Also -------- open_consolidated
def run(self, cmd): import __main__ main_dict = __main__.__dict__ return self.runctx(cmd, main_dict, main_dict)
Profile a single executable statement in the main namespace.
def _getOverlay(self, readDataInstance, sectionHdrsInstance):
    if readDataInstance is not None and sectionHdrsInstance is not None:
        try:
            offset = sectionHdrsInstance[-1].pointerToRawData.value + sectionHdrsInstance[-1].sizeOfRawData.value
            readDataInstance.setOffset(offset)
        except excep.WrongOffsetValueException:
            if self._verbose:
                print("It seems that the file has no overlay data.")
    else:
        raise excep.InstanceErrorException("ReadData instance or SectionHeaders instance not specified.")
    return readDataInstance.data[readDataInstance.offset:]
Returns the overlay data from the PE file. @type readDataInstance: L{ReadData} @param readDataInstance: A L{ReadData} instance containing the PE file data. @type sectionHdrsInstance: L{SectionHeaders} @param sectionHdrsInstance: A L{SectionHeaders} instance containing the information about the sections present in the PE file. @rtype: str @return: A string with the overlay data from the PE file. @raise InstanceErrorException: If the C{readDataInstance} or the C{sectionHdrsInstance} were not specified.
def strip_empty_values(obj):
    if isinstance(obj, dict):
        new_obj = {}
        for key, val in obj.items():
            new_val = strip_empty_values(val)
            if new_val is not None:
                new_obj[key] = new_val
        return new_obj or None
    elif isinstance(obj, (list, tuple, set)):
        new_obj = []
        for val in obj:
            new_val = strip_empty_values(val)
            if new_val is not None:
                new_obj.append(new_val)
        return type(obj)(new_obj) or None
    elif obj or obj is False or obj == 0:
        return obj
    else:
        return None
Recursively strips empty values.
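A small usage sketch of strip_empty_values with illustrative input; note that False and 0 are preserved while '' and None are stripped:

data = {'a': '', 'b': {'c': 0, 'd': None}, 'e': [1, '', False]}
assert strip_empty_values(data) == {'b': {'c': 0}, 'e': [1, False]}
# a container that becomes empty collapses to None
assert strip_empty_values({'x': None, 'y': ''}) is None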
def update_from(self, res_list): for res in res_list: name = res.properties.get(self._manager._name_prop, None) uri = res.properties.get(self._manager._uri_prop, None) self.update(name, uri)
Update the Name-URI cache from the provided resource list. This is done by going through the resource list and updating any cache entries for non-empty resource names in that list. Other cache entries remain unchanged.
def OnDoubleClick(self, event): node = HotMapNavigator.findNodeAtPosition(self.hot_map, event.GetPosition()) if node: wx.PostEvent( self, SquareActivationEvent( node=node, point=event.GetPosition(), map=self ) )
Double click on a given square in the map
def batch_run(self, *commands):
    original_retries = self.repeat_commands
    self.repeat_commands = 1
    for _ in range(original_retries):
        for command in commands:
            cmd = command[0]
            args = command[1:]
            cmd(*args)
    self.repeat_commands = original_retries
Run a batch of commands in sequence. Input is positional arguments of (function pointer, *args) tuples.

This method is useful for executing commands to multiple groups with retries, without overly long delays. For example,
- Set group 1 to red and brightness to 10%
- Set group 2 to red and brightness to 10%
- Set group 3 to white and brightness to 100%
- Turn off group 4

With three repeats, running these consecutively takes approximately 100ms * 13 commands * 3 times = 3.9 seconds. With batch_run, execution takes the same total time, but the first loop - in which each command is sent once to every group - finishes within 1.3 seconds. After that, each command is repeated two more times. Most of the time, this ensures slightly faster changes for each group.

Usage:
led.batch_run((led.set_color, "red", 1), (led.set_brightness, 10, 1), (led.set_color, "white", 3), ...)
def delete_network(self, network): n_res = MechResource(network['id'], a_const.NETWORK_RESOURCE, a_const.DELETE) self.provision_queue.put(n_res)
Enqueue network delete
def _rearrange_output_for_package(self, target_workdir, java_package):
    package_dir_rel = java_package.replace('.', os.path.sep)
    package_dir = os.path.join(target_workdir, package_dir_rel)
    safe_mkdir(package_dir)
    for root, dirs, files in safe_walk(target_workdir):
        if root == package_dir_rel:
            continue
        for f in files:
            os.rename(
                os.path.join(root, f),
                os.path.join(package_dir, f)
            )
    for root, dirs, files in safe_walk(target_workdir, topdown=False):
        for d in dirs:
            full_dir = os.path.join(root, d)
            if not os.listdir(full_dir):
                os.rmdir(full_dir)
Rearrange the output files to match a standard Java structure. Antlr emits a directory structure based on the relative path provided for the grammar file. If the source root of the file is different from the Pants build root, then the Java files end up with undesired parent directories.
def date_string_to_date(p_date):
    result = None
    if p_date:
        parsed_date = re.match(r'(\d{4})-(\d{2})-(\d{2})', p_date)
        if parsed_date:
            result = date(
                int(parsed_date.group(1)),
                int(parsed_date.group(2)),
                int(parsed_date.group(3))
            )
        else:
            raise ValueError
    return result
Given a date in YYYY-MM-DD, returns a Python date object. Throws a ValueError if the date is invalid.
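A quick usage sketch of date_string_to_date with illustrative dates:

from datetime import date

assert date_string_to_date('2019-03-31') == date(2019, 3, 31)
assert date_string_to_date('') is None   # empty input yields None
try:
    date_string_to_date('2019-13-01')    # matches the pattern, but month 13 is invalid
except ValueError:
    pass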
def RemoveWifiConnection(self, dev_path, connection_path): dev_obj = dbusmock.get_object(dev_path) settings_obj = dbusmock.get_object(SETTINGS_OBJ) connections = dev_obj.Get(DEVICE_IFACE, 'AvailableConnections') main_connections = settings_obj.ListConnections() if connection_path not in connections and connection_path not in main_connections: return connections.remove(dbus.ObjectPath(connection_path)) dev_obj.Set(DEVICE_IFACE, 'AvailableConnections', connections) main_connections.remove(connection_path) settings_obj.Set(SETTINGS_IFACE, 'Connections', main_connections) settings_obj.EmitSignal(SETTINGS_IFACE, 'ConnectionRemoved', 'o', [connection_path]) connection_obj = dbusmock.get_object(connection_path) connection_obj.EmitSignal(CSETTINGS_IFACE, 'Removed', '', []) self.object_manager_emit_removed(connection_path) self.RemoveObject(connection_path)
Remove the specified WiFi connection. You have to specify the device to remove the connection from, and the path of the Connection. Please note that this does not set any global properties.
def lookup(ctx, path): regions = parse_intervals(path, as_context=ctx.obj['semantic']) _report_from_regions(regions, ctx.obj)
Determine which tests intersect a source interval.
def _GetPathSegmentSeparator(self, path):
    if path.startswith('\\') or path[1:].startswith(':\\'):
        return '\\'
    if path.startswith('/'):
        return '/'
    if '/' in path and '\\' in path:
        # Both separators appear in the path: pick the more common one.
        forward_count = len(path.split('/'))
        backward_count = len(path.split('\\'))
        if forward_count > backward_count:
            return '/'
        return '\\'
    if '/' in path:
        return '/'
    return '\\'
Given a path give back the path separator as a best guess. Args: path (str): path. Returns: str: path segment separator.
def round_sf(number, digits):
    units = None
    try:
        num = number.magnitude
        units = number.units
    except AttributeError:
        num = number
    try:
        if units is not None:
            rounded_num = round(num, digits - int(floor(log10(abs(num)))) - 1) * units
        else:
            rounded_num = round(num, digits - int(floor(log10(abs(num)))) - 1)
        return rounded_num
    except ValueError:
        if units is not None:
            return 0 * units
        else:
            return 0
Returns inputted value rounded to number of significant figures desired. :param number: Value to be rounded :type number: float :param digits: number of significant digits to be rounded to. :type digits: int
def _pseudodepths_wenner(configs, spacing=1, grid=None):
    if grid is None:
        xpositions = (configs - 1) * spacing
    else:
        xpositions = grid.get_electrode_positions()[configs - 1, 0]
    z = np.abs(np.max(xpositions, axis=1) - np.min(xpositions, axis=1)) * -0.11
    x = np.mean(xpositions, axis=1)
    return x, z
Given distances between electrodes, compute Wenner pseudo depths for the provided configuration The pseudodepth is computed after Roy & Apparao, 1971, as 0.11 times the distance between the two outermost electrodes. It's not really clear why the Wenner depths are different from the Dipole-Dipole depths, given the fact that Wenner configurations are a complete subset of the Dipole-Dipole configurations.
def user_object( element_name, cls, child_processors, required=True, alias=None, hooks=None ): converter = _user_object_converter(cls) processor = _Aggregate(element_name, converter, child_processors, required, alias) return _processor_wrap_if_hooks(processor, hooks)
Create a processor for user objects. :param cls: Class object with a no-argument constructor or other callable no-argument object. See also :func:`declxml.dictionary`
def _get_client(): client = salt.cloud.CloudClient( os.path.join(os.path.dirname(__opts__['conf_file']), 'cloud'), pillars=copy.deepcopy(__pillar__.get('cloud', {})) ) return client
Return a cloud client
def toc_directive(self, maxdepth=1): articles_directive_content = TC.toc.render( maxdepth=maxdepth, article_list=self.sub_article_folders, ) return articles_directive_content
Generate toctree directive text. :param table_of_content_header: :param header_bar_char: :param header_line_length: :param maxdepth: :return:
def rethreshold(self, new_threshold, new_threshold_type='MAD'): for family in self.families: rethresh_detections = [] for d in family.detections: if new_threshold_type == 'MAD' and d.threshold_type == 'MAD': new_thresh = (d.threshold / d.threshold_input) * new_threshold elif new_threshold_type == 'MAD' and d.threshold_type != 'MAD': raise MatchFilterError( 'Cannot recalculate MAD level, ' 'use another threshold type') elif new_threshold_type == 'absolute': new_thresh = new_threshold elif new_threshold_type == 'av_chan_corr': new_thresh = new_threshold * d.no_chans else: raise MatchFilterError( 'new_threshold_type %s is not recognised' % str(new_threshold_type)) if d.detect_val >= new_thresh: d.threshold = new_thresh d.threshold_input = new_threshold d.threshold_type = new_threshold_type rethresh_detections.append(d) family.detections = rethresh_detections return self
Remove detections from the Party that are below a new threshold. .. Note:: threshold can only be set higher. .. Warning:: Works in place on Party. :type new_threshold: float :param new_threshold: New threshold level :type new_threshold_type: str :param new_threshold_type: Either 'MAD', 'absolute' or 'av_chan_corr' .. rubric:: Examples Using the MAD threshold on detections made using the MAD threshold: >>> party = Party().read() >>> len(party) 4 >>> party = party.rethreshold(10.0) >>> len(party) 4 >>> # Note that all detections are self detections Using the absolute thresholding method on the same Party: >>> party = Party().read().rethreshold(6.0, 'absolute') >>> len(party) 1 Using the av_chan_corr method on the same Party: >>> party = Party().read().rethreshold(0.9, 'av_chan_corr') >>> len(party) 4
def _match_setters(self, query): q = query.decode('utf-8') for name, parser, response, error_response in self._setters: try: parsed = parser(q) logger.debug('Found response in setter of %s' % name) except ValueError: continue try: if isinstance(parsed, dict) and 'ch_id' in parsed: self._selected = parsed['ch_id'] self._properties[name].set_value(parsed['0']) else: self._properties[name].set_value(parsed) return response except ValueError: if isinstance(error_response, bytes): return error_response return self._device.error_response('command_error') return None
Try to find a match
def get_all_items_of_delivery_note(self, delivery_note_id): return self._iterate_through_pages( get_function=self.get_items_of_delivery_note_per_page, resource=DELIVERY_NOTE_ITEMS, **{'delivery_note_id': delivery_note_id} )
Get all items of delivery note.

This will iterate over all pages until it gets all elements. So if the rate limit is exceeded, it will throw an exception and you will get nothing.

:param delivery_note_id: the delivery note id
:return: list
def one_to_many(df, unitcol, manycol):
    subset = df[[manycol, unitcol]].drop_duplicates()
    for many in subset[manycol].unique():
        if subset[subset[manycol] == many].shape[0] > 1:
            msg = "{} in {} has multiple values for {}".format(many, manycol, unitcol)
            raise AssertionError(msg)
    return df
Assert that a one-to-many relationship is preserved between two columns. For example, a retail store will have distinct departments, each with several employees. If each employee may only work in a single department, then the relationship of the department to the employees is one to many.

Parameters
==========
df : DataFrame
unitcol : str
    The column that encapsulates the groups in ``manycol``.
manycol : str
    The column that must remain unique in the distinct pairs between ``manycol`` and ``unitcol``

Returns
=======
df : DataFrame
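An illustrative usage sketch of one_to_many with made-up data, assuming pandas is available:

import pandas as pd

ok = pd.DataFrame({
    'department': ['toys', 'toys', 'food'],
    'employee': ['ann', 'bob', 'carol'],
})
one_to_many(ok, unitcol='department', manycol='employee')   # passes: each employee has one department

bad = pd.DataFrame({
    'department': ['toys', 'food'],
    'employee': ['ann', 'ann'],   # 'ann' appears in two departments
})
try:
    one_to_many(bad, unitcol='department', manycol='employee')
except AssertionError:
    pass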
def render_generator(self, context, result): context.response.encoding = 'utf8' context.response.app_iter = ( (i.encode('utf8') if isinstance(i, unicode) else i) for i in result if i is not None ) return True
Attempt to serve generator responses through stream encoding. This allows for direct use of cinje template functions, which are generators, as returned views.
def debug(self): try: __import__('ipdb').post_mortem(self.traceback) except ImportError: __import__('pdb').post_mortem(self.traceback)
Launch a postmortem debug shell at the site of the error.
def get_sla_template_path(service_type=ServiceTypes.ASSET_ACCESS): if service_type == ServiceTypes.ASSET_ACCESS: name = 'access_sla_template.json' elif service_type == ServiceTypes.CLOUD_COMPUTE: name = 'compute_sla_template.json' elif service_type == ServiceTypes.FITCHAIN_COMPUTE: name = 'fitchain_sla_template.json' else: raise ValueError(f'Invalid/unsupported service agreement type {service_type}') return os.path.join(os.path.sep, *os.path.realpath(__file__).split(os.path.sep)[1:-1], name)
Get the template for a ServiceType. :param service_type: ServiceTypes :return: Path of the template, str
def convert(name):
    s1 = re.sub('(.)([A-Z][a-z]+)', r'\1_\2', name)
    return re.sub('([a-z0-9])([A-Z])', r'\1_\2', s1).lower()
Convert CamelCase to underscore Parameters ---------- name : str Camelcase string Returns ------- name : str Converted name
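A few illustrative calls to convert, checked by hand against the two regex passes above:

assert convert('CamelCase') == 'camel_case'
assert convert('HTTPResponse') == 'http_response'
assert convert('getHTTPResponseCode') == 'get_http_response_code'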
def get_task(self, id, client=None): client = self._require_client(client) task = Task(taskqueue=self, id=id) try: response = client.connection.api_request(method='GET', path=task.path, _target_object=task) task._set_properties(response) return task except NotFound: return None
Gets a named task from taskqueue.

If the task isn't found (backend 404), ``None`` is returned instead of raising :class:`gcloud.exceptions.NotFound`.

:type id: string
:param id: A task name to get
:type client: :class:`gcloud.taskqueue.client.Client` or ``NoneType``
:param client: Optional. The client to use. If not passed, falls back to the ``client`` stored on the current taskqueue.
:rtype: :class:`_Task` or ``NoneType``
:returns: a task, or ``None`` if not found
def _parse_hparams(hparams): prefixes = ["agent_", "optimizer_", "runner_", "replay_buffer_"] ret = [] for prefix in prefixes: ret_dict = {} for key in hparams.values(): if prefix in key: par_name = key[len(prefix):] ret_dict[par_name] = hparams.get(key) ret.append(ret_dict) return ret
Split hparams, based on key prefixes. Args: hparams: hyperparameters Returns: Tuple of hparams for respectably: agent, optimizer, runner, replay_buffer.
def opt_strip(prefix, opts): ret = {} for opt_name, opt_value in opts.items(): if opt_name.startswith(prefix): opt_name = opt_name[len(prefix):] ret[opt_name] = opt_value return ret
Given a dict of opts that start with prefix, remove the prefix from each of them.
def get_settings(self, link): return reverse( 'servicesettings-detail', kwargs={'uuid': link.service.settings.uuid}, request=self.context['request'])
URL of service settings
def get_pubmed_record(pmid): handle = Entrez.esummary(db="pubmed", id=pmid) record = Entrez.read(handle) return record
Get PubMed record from PubMed ID.
def prep(config=None, path=None): if config is None: config = parse() if path is None: path = os.getcwd() root = config.get('root', 'path') root = os.path.join(path, root) root = os.path.realpath(root) os.environ['SCIDASH_HOME'] = root if sys.path[0] != root: sys.path.insert(0, root)
Prepare to read the configuration information.
def link_property(prop, cls_object): register = False cls_name = cls_object.__name__ if cls_name and cls_name != 'RdfBaseClass': new_name = "%s_%s" % (prop._prop_name, cls_name) else: new_name = prop._prop_name new_prop = types.new_class(new_name, (prop,), {'metaclass': RdfLinkedPropertyMeta, 'cls_name': cls_name, 'prop_name': prop._prop_name, 'linked_cls': cls_object}) return new_prop
Generates a property class linked to the rdfclass args: prop: unlinked property class cls_name: the name of the rdf_class with which the property is associated cls_object: the rdf_class
def __hammingDistance(s1, s2):
    l1 = len(s1)
    l2 = len(s2)
    if l1 != l2:
        raise ValueError("Hamming distance requires strings of same size.")
    return sum(ch1 != ch2 for ch1, ch2 in zip(s1, s2))
Finds the Hamming distance between two strings.

@param s1: string
@param s2: string
@return: the distance
@raise ValueError: if the lengths of the strings differ
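A short usage sketch (classic textbook example), assuming the function is reachable at module level as defined above:

assert __hammingDistance('karolin', 'kathrin') == 3
try:
    __hammingDistance('abc', 'abcd')   # different lengths are rejected
except ValueError:
    pass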
def records(self): compounds = ModelList() seen_labels = set() tagged_tokens = [(CONTROL_RE.sub('', token), tag) for token, tag in self.tagged_tokens] for parser in self.parsers: for record in parser.parse(tagged_tokens): p = record.serialize() if not p: continue if record in compounds: continue if all(k in {'labels', 'roles'} for k in p.keys()) and set(record.labels).issubset(seen_labels): continue seen_labels.update(record.labels) compounds.append(record) return compounds
Return a list of records for this sentence.
def dim(self, dim):
    contrast = 0
    if not dim:
        if self._vccstate == SSD1306_EXTERNALVCC:
            contrast = 0x9F
        else:
            contrast = 0xCF
    # The computed contrast value is not applied here; a call to the
    # driver's set-contrast routine would be needed for this method to
    # have a visible effect on the display.
Adjusts contrast to dim the display if dim is True, otherwise sets the contrast to normal brightness if dim is False.
def components(self): with self._mutex: if not self._components: self._components = [c for c in self.children if c.is_component] return self._components
The list of components in this manager, if any. This information can also be found by listing the children of this node that are of type @ref Component. That method is more useful as it returns the tree entries for the components.
def extend_request_args(self, args, item_cls, item_type, key, parameters, orig=False): try: item = self.get_item(item_cls, item_type, key) except KeyError: pass else: for parameter in parameters: if orig: try: args[parameter] = item[parameter] except KeyError: pass else: try: args[parameter] = item[verified_claim_name(parameter)] except KeyError: try: args[parameter] = item[parameter] except KeyError: pass return args
Add a set of parameters and their value to a set of request arguments.

:param args: A dictionary
:param item_cls: The :py:class:`oidcmsg.message.Message` subclass that describes the item
:param item_type: The type of item, this is one of the parameter names in the :py:class:`oidcservice.state_interface.State` class.
:param key: The key to the information in the database
:param parameters: A list of parameters whose values this method will return.
:param orig: Where the value of a claim is a signed JWT, return that.
:return: A dictionary with keys from the list of parameters and values being the values of those parameters in the item. If the parameter does not appear in the item it will not appear in the returned dictionary.
def ip2long(ip):
    if not validate_ip(ip):
        return None
    quads = ip.split('.')
    if len(quads) == 1:
        quads = quads + [0, 0, 0]
    elif len(quads) < 4:
        host = quads[-1:]
        quads = quads[:-1] + [0, ] * (4 - len(quads)) + host
    lngip = 0
    for q in quads:
        lngip = (lngip << 8) | int(q)
    return lngip
Convert a dotted-quad ip address to a network byte order 32-bit integer.

>>> ip2long('127.0.0.1')
2130706433
>>> ip2long('127.1')
2130706433
>>> ip2long('127')
2130706432
>>> ip2long('127.0.0.256') is None
True

:param ip: Dotted-quad ip address (eg. '127.0.0.1').
:type ip: str
:returns: Network byte order 32-bit integer or ``None`` if ip is invalid.
def build_index(self, idx_name, _type='default'): "Build the index related to the `name`." indexes = {} has_non_string_values = False for key, item in self.data.items(): if idx_name in item: value = item[idx_name] if not isinstance(value, six.string_types): has_non_string_values = True if value not in indexes: indexes[value] = set([]) indexes[value].add(key) self.indexes[idx_name] = indexes if self._meta.lazy_indexes or has_non_string_values: _type = 'lazy' self.index_defs[idx_name] = {'type': _type}
Build the index related to the `name`.
def __continue_session(self): now = time.time() diff = abs(now - self.last_request_time) timeout_sec = self.session_timeout * 60 if diff >= timeout_sec: self.__log('Session timed out, attempting to authenticate') self.authenticate()
Check if the time since the last HTTP request is under the session timeout limit. If it's been too long since the last request attempt to authenticate again.
def update(self, sequence): item_index = None try: for item in sequence: item_index = self.add(item) except TypeError: raise ValueError( "Argument needs to be an iterable, got %s" % type(sequence) ) return item_index
Update the set with the given iterable sequence, then return the index of the last element inserted.

Example:
    >>> oset = OrderedSet([1, 2, 3])
    >>> oset.update([3, 1, 5, 1, 4])
    4
    >>> print(oset)
    OrderedSet([1, 2, 3, 5, 4])
def verify_calling_thread(self, should_be_emulation, message=None): if should_be_emulation == self._on_emulation_thread(): return if message is None: message = "Operation performed on invalid thread" raise InternalError(message)
Verify if the calling thread is or is not the emulation thread. This method can be called to make sure that an action is being taken in the appropriate context such as not blocking the event loop thread or modifying an emulate state outside of the event loop thread. If the verification fails an InternalError exception is raised, allowing this method to be used to protect other methods from being called in a context that could deadlock or cause race conditions. Args: should_be_emulation (bool): True if this call should be taking place on the emulation, thread, False if it must not take place on the emulation thread. message (str): Optional message to include when raising the exception. Otherwise a generic message is used. Raises: InternalError: When called from the wrong thread.
def _add_months(self, date, months):
    year = date.year + (date.month + months - 1) // 12
    month = (date.month + months - 1) % 12 + 1
    return datetime.date(year=year, month=month, day=1)
Add ``months`` months to ``date``. Unfortunately we can't use timedeltas to add months because timedelta counts in days and there's no foolproof way to add N months in days without counting the number of days per month.
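A quick worked check of the month arithmetic above, reproduced as a standalone helper (same formula, lifted out of the class; the dates are hypothetical):

import datetime

def add_months(d, months):
    # same arithmetic as _add_months above
    year = d.year + (d.month + months - 1) // 12
    month = (d.month + months - 1) % 12 + 1
    return datetime.date(year=year, month=month, day=1)

assert add_months(datetime.date(2020, 11, 15), 3) == datetime.date(2021, 2, 1)
assert add_months(datetime.date(2020, 1, 31), 12) == datetime.date(2021, 1, 1)
# note: the day is always reset to 1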
def _build_raw_headers(self, headers: Dict) -> Tuple: raw_headers = [] for k, v in headers.items(): raw_headers.append((k.encode('utf8'), v.encode('utf8'))) return tuple(raw_headers)
Convert a dict of headers to a tuple of tuples Mimics the format of ClientResponse.
def register(self, obj): for method in dir(obj): if not method.startswith('_'): fct = getattr(obj, method) try: getattr(fct, '__call__') except AttributeError: pass else: logging.debug('JSONRPC: Found Method: "%s"' % method) self._methods[method] = { 'argspec': inspect.getargspec(fct), 'fct': fct }
register all methods for of an object as json rpc methods obj - object with methods
def service_reload(service_name, restart_on_failure=False, **kwargs): service_result = service('reload', service_name, **kwargs) if not service_result and restart_on_failure: service_result = service('restart', service_name, **kwargs) return service_result
Reload a system service, optionally falling back to restart if reload fails.

The specified service name is managed via the system level init system. Some init systems (e.g. upstart) require that additional arguments be provided in order to directly control service instances whereas other init systems allow for addressing instances of a service directly by name (e.g. systemd).

The kwargs allow for the additional parameters to be passed to underlying init systems for those systems which require/allow for them. For example, the ceph-osd upstart script requires the id parameter to be passed along in order to identify which running daemon should be reloaded. The following example restarts the ceph-osd service for instance id=4:

service_reload('ceph-osd', id=4)

:param service_name: the name of the service to reload
:param restart_on_failure: boolean indicating whether to fallback to a restart if the reload fails.
:param **kwargs: additional parameters to pass to the init system when managing services. These will be passed as key=value parameters to the init system's commandline. kwargs are ignored for init systems not allowing additional parameters via the commandline (systemd).
def get_min_sec_from_morning(self): mins = [] for timerange in self.timeranges: mins.append(timerange.get_sec_from_morning()) return min(mins)
Get the first second from midnight where a timerange is effective.

:return: smallest number of seconds from midnight of all timeranges
:rtype: int
def pwm_max_score(self): if self.max_score is None: score = 0 for row in self.pwm: score += log(max(row) / 0.25 + 0.01) self.max_score = score return self.max_score
Return the maximum PWM score. Returns ------- score : float Maximum PWM score.
def get_area_def(self, dsid): msg = self._get_message(self._msg_datasets[dsid]) try: return self._area_def_from_msg(msg) except (RuntimeError, KeyError): raise RuntimeError("Unknown GRIB projection information")
Get area definition for message. If latlong grid then convert to valid eqc grid.
def Overlay(child, parent): for arg in child, parent: if not isinstance(arg, collections.Mapping): raise DefinitionError("Trying to merge badly defined hints. Child: %s, " "Parent: %s" % (type(child), type(parent))) for attr in ["fix", "format", "problem", "summary"]: if not child.get(attr): child[attr] = parent.get(attr, "").strip() return child
Adds hint attributes to a child hint if they are not defined.
def sort_by(self, fieldName, reverse=False): return self.__class__( sorted(self, key = lambda item : self._get_item_value(item, fieldName), reverse=reverse) )
sort_by - Return a copy of this collection, sorted by the given fieldName. The fieldName is accessed the same way as other filtering, so it supports custom properties, etc. @param fieldName <str> - The name of the field on which to sort by @param reverse <bool> Default False - If True, list will be in reverse order. @return <QueryableList> - A QueryableList of the same type with the elements sorted based on arguments.
def _build_calmar_data(self): assert self.initial_weight_name is not None data = pd.DataFrame() data[self.initial_weight_name] = self.initial_weight * self.filter_by for variable in self.margins_by_variable: if variable == 'total_population': continue assert variable in self.survey_scenario.tax_benefit_system.variables period = self.period data[variable] = self.survey_scenario.calculate_variable(variable = variable, period = period) return data
Builds the data dictionary used as calmar input argument.
def write(self, data):
    data_off = 0
    while data_off < len(data):
        left = len(self._buf) - self._pos
        if left <= 0:
            self._write_packet(final=False)
        else:
            to_write = min(left, len(data) - data_off)
            self._buf[self._pos:self._pos + to_write] = data[data_off:data_off + to_write]
            self._pos += to_write
            data_off += to_write
Writes given bytes buffer into the stream. The function returns only when the entire buffer is written.
def validate_protocol(protocol): if not re.match(PROTOCOL_REGEX, protocol): raise ValueError(f'invalid protocol: {protocol}') return protocol.lower()
Validate a protocol, a string, and return it.
def export_node(bpmn_graph, export_elements, node, nodes_classification, order=0, prefix="", condition="", who="", add_join=False): node_type = node[1][consts.Consts.type] if node_type == consts.Consts.start_event: return BpmnDiagramGraphCsvExport.export_start_event(bpmn_graph, export_elements, node, nodes_classification, order=order, prefix=prefix, condition=condition, who=who) elif node_type == consts.Consts.end_event: return BpmnDiagramGraphCsvExport.export_end_event(export_elements, node, order=order, prefix=prefix, condition=condition, who=who) else: return BpmnDiagramGraphCsvExport.export_element(bpmn_graph, export_elements, node, nodes_classification, order=order, prefix=prefix, condition=condition, who=who, add_join=add_join)
General method for node exporting

:param bpmn_graph: an instance of BpmnDiagramGraph class,
:param export_elements: a dictionary object. The key is a node ID, value is a dictionary of parameters that will be used in exported CSV document,
:param node: networkx.Node object,
:param nodes_classification: dictionary of classification labels. Key - node id. Value - a list of labels,
:param order: the order param of exported node,
:param prefix: the prefix of exported node - if the task appears after some gateway, the prefix will identify the branch
:param condition: the condition param of exported node,
:param who: the who param of exported node,
:param add_join: boolean flag. Used to indicate if "Join" element should be added to CSV.
:return: None or the next node object if the exported node was a gateway join.
def _trim_buffer_garbage(rawmessage, debug=True):
    while rawmessage and rawmessage[0] != MESSAGE_START_CODE_0X02:
        if debug:
            _LOGGER.debug('Buffer content: %s', binascii.hexlify(rawmessage))
            _LOGGER.debug('Trimming leading buffer garbage')
        rawmessage = rawmessage[1:]
    return rawmessage
Remove leading bytes from a byte stream. A proper message byte stream begins with 0x02.
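A small usage sketch of _trim_buffer_garbage, assuming MESSAGE_START_CODE_0X02 is the 0x02 byte as the docstring implies:

raw = bytes([0x15, 0x15, 0x02, 0x62, 0x01])
assert _trim_buffer_garbage(raw, debug=False) == bytes([0x02, 0x62, 0x01])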
def state(self, state): logger.debug('client changing to state=%s', ClientState.Names[state]) self._state = state
Change the state of the client. This is one of the values defined in ClientStates.
def _record_first_run(): info = {'pid': _get_shell_pid(), 'time': time.time()} mode = 'wb' if six.PY2 else 'w' with _get_not_configured_usage_tracker_path().open(mode) as tracker: json.dump(info, tracker)
Records shell pid to tracker file.
def strain_in_plane(self, **kwargs): if self._strain_out_of_plane is not None: return ((self._strain_out_of_plane / -2.) * (self.unstrained.c11(**kwargs) / self.unstrained.c12(**kwargs) ) ) else: return 1 - self.unstrained.a(**kwargs) / self.substrate.a(**kwargs)
Returns the in-plane strain assuming no lattice relaxation, which is positive for tensile strain and negative for compressive strain.
def packet_in_handler(self, evt): msg = evt.msg dpid = msg.datapath.id req_pkt = packet.Packet(msg.data) req_igmp = req_pkt.get_protocol(igmp.igmp) if req_igmp: if self._querier.dpid == dpid: self._querier.packet_in_handler(req_igmp, msg) else: self._snooper.packet_in_handler(req_pkt, req_igmp, msg) else: self.send_event_to_observers(EventPacketIn(msg))
PacketIn event handler. when the received packet was IGMP, proceed it. otherwise, send a event.
def xack(self, stream, group_name, id, *ids): return self.execute(b'XACK', stream, group_name, id, *ids)
Acknowledge a message for a given consumer group
def asset_asset_swap( self, asset1_id, asset1_transfer_spec, asset2_id, asset2_transfer_spec, fees): btc_transfer_spec = TransferParameters( asset1_transfer_spec.unspent_outputs, asset1_transfer_spec.to_script, asset1_transfer_spec.change_script, 0) return self.transfer( [(asset1_id, asset1_transfer_spec), (asset2_id, asset2_transfer_spec)], btc_transfer_spec, fees)
Creates a transaction for swapping an asset for another asset. :param bytes asset1_id: The ID of the first asset. :param TransferParameters asset1_transfer_spec: The parameters of the first asset being transferred. It is also used for paying fees and/or receiving change if any. :param bytes asset2_id: The ID of the second asset. :param TransferDetails asset2_transfer_spec: The parameters of the second asset being transferred. :param int fees: The fees to include in the transaction. :return: The resulting unsigned transaction. :rtype: CTransaction
def resolve(self, space_id=None, environment_id=None): proxy_method = getattr( self._client, base_path_for(self.link_type) ) if self.link_type == 'Space': return proxy_method().find(self.id) elif environment_id is not None: return proxy_method(space_id, environment_id).find(self.id) else: return proxy_method(space_id).find(self.id)
Resolves link to a specific resource.
def start(st_reg_number):
    weights = [9, 8, 7, 6, 5, 4, 3, 2]
    digit_state_registration = st_reg_number[-1]
    if len(st_reg_number) != 9:
        return False
    sum_total = 0
    for i in range(0, 8):
        sum_total = sum_total + weights[i] * int(st_reg_number[i])
    if sum_total % 11 == 0:
        return digit_state_registration[-1] == '0'
    digit_check = 11 - sum_total % 11
    return str(digit_check) == digit_state_registration
Checks the number validity for the Paraiba state.
def mul_table(self, other): other = coerceBigInt(other) if not other: return NotImplemented other %= orderG2() if not self._table: self._table = lwnafTable() librelic.ep2_mul_pre_lwnaf(byref(self._table), byref(self)) result = G2Element() librelic.ep2_mul_fix_lwnaf(byref(result), byref(self._table), byref(other)) return result
Fast multiplication using the LWNAF precomputation table.
def __get_node_by_name(self, name): try: for entry in filter(lambda x: x.name == name, self.nodes()): return entry except StopIteration: raise ValueError("Attempted to retrieve a non-existing tree node with name: {name}" "".format(name=name))
Returns the first TreeNode object whose name matches the specified argument.

:raises: ValueError (if no node with the specified name is present in the tree)
def quote_edge(identifier):
    node, _, rest = identifier.partition(':')
    parts = [quote(node)]
    if rest:
        port, _, compass = rest.partition(':')
        parts.append(quote(port))
        if compass:
            parts.append(compass)
    return ':'.join(parts)
Return DOT edge statement node_id from string, quote if needed.

>>> quote_edge('spam')
'spam'
>>> quote_edge('spam spam:eggs eggs')
'"spam spam":"eggs eggs"'
>>> quote_edge('spam:eggs:s')
'spam:eggs:s'
def text_search(self, search, *, limit=0, table='assets'): return backend.query.text_search(self.connection, search, limit=limit, table=table)
Return an iterator of assets that match the text search Args: search (str): Text search string to query the text index limit (int, optional): Limit the number of returned documents. Returns: iter: An iterator of assets that match the text search.
def _bind_length_handlers(tids, user_handler, lns): for tid in tids: for ln in lns: type_octet = _gen_type_octet(tid, ln) ion_type = _TID_VALUE_TYPE_TABLE[tid] if ln == 1 and ion_type is IonType.STRUCT: handler = partial(_ordered_struct_start_handler, partial(user_handler, ion_type)) elif ln < _LENGTH_FIELD_FOLLOWS: handler = partial(user_handler, ion_type, ln) else: handler = partial(_var_uint_field_handler, partial(user_handler, ion_type)) _HANDLER_DISPATCH_TABLE[type_octet] = handler
Binds a set of handlers with the given factory. Args: tids (Sequence[int]): The Type IDs to bind to. user_handler (Callable): A function that takes as its parameters :class:`IonType`, ``length``, and the ``ctx`` context returning a co-routine. lns (Sequence[int]): The low-nibble lengths to bind to.
def main(): parser = argparse.ArgumentParser( description='Relocate a virtual environment.' ) parser.add_argument( '--source', help='The existing virtual environment.', required=True, ) parser.add_argument( '--destination', help='The location for which to configure the virtual environment.', required=True, ) parser.add_argument( '--move', help='Move the virtual environment to the destination.', default=False, action='store_true', ) args = parser.parse_args() relocate(args.source, args.destination, args.move)
Relocate a virtual environment.
def resized(self, dl, targ, new_path, resume = True, fn=None): return dl.dataset.resize_imgs(targ, new_path, resume=resume, fn=fn) if dl else None
Return a copy of this dataset resized
def run_config_diagnostics(config_path=CONFIG_PATH): config = read_config(config_path) missing_sections = set() malformed_entries = defaultdict(set) for section, expected_section_keys in SECTION_KEYS.items(): section_content = config.get(section) if not section_content: missing_sections.add(section) else: for option in expected_section_keys: option_value = section_content.get(option) if not option_value: malformed_entries[section].add(option) return config_path, missing_sections, malformed_entries
Run diagnostics on the configuration file. Args: config_path (str): Path to the configuration file. Returns: str, Set[str], dict(str, Set[str]): The path to the configuration file, a set of missing sections and a dict that maps each section to the entries that have either missing or empty options.
def _HasExpectedLineLength(self, file_object): original_file_position = file_object.tell() line_reader = self._CreateLineReader(file_object) for _ in range(0, 20): sample_line = line_reader.readline(self._maximum_line_length + 1) if len(sample_line) > self._maximum_line_length: file_object.seek(original_file_position) return False file_object.seek(original_file_position) return True
Determines if a file begins with lines of the expected length. As we know the maximum length of valid lines in the DSV file, the presence of lines longer than this indicates that the file will not be parsed successfully, without reading excessive data from a large file. Args: file_object (dfvfs.FileIO): file-like object. Returns: bool: True if the file has lines of the expected length.
def load(stream=None): if stream: loads(stream.read()) else: data = pkgutil.get_data(insights.__name__, _filename) return loads(data) if data else None
Loads filters from a stream, normally an open file. If one is not passed, filters are loaded from a default location within the project.
def open_in_browser(file_location):
    if not os.path.isfile(file_location):
        file_location = os.path.join(os.getcwd(), file_location)
        if not os.path.isfile(file_location):
            raise IOError("\n\nFile not found.")
    if sys.platform == "darwin":
        file_location = "file:///" + file_location
    new = 2
    webbrowser.get().open(file_location, new=new)
Attempt to open file located at file_location in the default web browser.
def make_auth_headers(self, content_type): headers = self.make_headers(content_type) headers['Authorization'] = 'Basic {}'.format(self.get_auth_string()) return headers
Add authorization header.
def _syllabifyPhones(phoneList, syllableList):
    numPhoneList = [len(syllable) for syllable in syllableList]
    start = 0
    syllabifiedList = []
    for end in numPhoneList:
        syllable = phoneList[start:start + end]
        syllabifiedList.append(syllable)
        start += end
    return syllabifiedList
Given a phone list and a syllable list, syllabify the phones Typically used by findBestSyllabification which first aligns the phoneList with a dictionary phoneList and then uses the dictionary syllabification to syllabify the input phoneList.
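A tiny usage sketch of _syllabifyPhones with hypothetical phone lists; only the syllable lengths from the dictionary entry matter:

phones = ['k', 'ae', 't', 'ax', 'l', 'ao', 'g']                 # input phones
dict_syllables = [['K', 'AE'], ['T', 'AH'], ['L', 'AO', 'G']]   # dictionary syllabification (lengths 2, 2, 3)
assert _syllabifyPhones(phones, dict_syllables) == [['k', 'ae'], ['t', 'ax'], ['l', 'ao', 'g']]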
def get_sortobj(self, goea_results, **kws): nts_goea = MgrNtGOEAs(goea_results).get_goea_nts_prt(**kws) goids = set(nt.GO for nt in nts_goea) go2nt = {nt.GO:nt for nt in nts_goea} grprobj = Grouper("GOEA", goids, self.hdrobj, self.grprdflt.gosubdag, go2nt=go2nt) grprobj.prt_summary(sys.stdout) sortobj = Sorter(grprobj, section_sortby=lambda nt: getattr(nt, self.pval_fld)) return sortobj
Return a Grouper object, given a list of GOEnrichmentRecord.
def config_(name: str, local: bool, package: str, section: str, key: Optional[str]): cfg = config.read_configs(package, name, local=local) if key: with suppress(NoOptionError, NoSectionError): echo(cfg.get(section, key)) else: with suppress(NoSectionError): for opt in cfg.options(section): colourise.pinfo(opt) echo(' {}'.format(cfg.get(section, opt)))
Extract or list values from config.
def migrate_config_file( self, config_file_path, always_update=False, current_file_type=None, output_file_name=None, output_file_type=None, create=True, update_defaults=True, dump_kwargs=None, include_bootstrap=True, ): current_file_type = current_file_type or self._file_type output_file_type = output_file_type or self._file_type output_file_name = output_file_name or config_file_path current_config = self._get_config_if_exists(config_file_path, create, current_file_type) migrated_config = {} if include_bootstrap: items = self._yapconf_items.values() else: items = [ item for item in self._yapconf_items.values() if not item.bootstrap ] for item in items: item.migrate_config(current_config, migrated_config, always_update, update_defaults) if create: yapconf.dump_data(migrated_config, filename=output_file_name, file_type=output_file_type, klazz=YapconfLoadError, dump_kwargs=dump_kwargs) return Box(migrated_config)
Migrates a configuration file. This is used to help you update your configurations throughout the lifetime of your application. It is probably best explained through example. Examples: Assume we have a JSON config file ('/path/to/config.json') like the following: ``{"db_name": "test_db_name", "db_host": "1.2.3.4"}`` >>> spec = YapconfSpec({ ... 'db_name': { ... 'type': 'str', ... 'default': 'new_default', ... 'previous_defaults': ['test_db_name'] ... }, ... 'db_host': { ... 'type': 'str', ... 'previous_defaults': ['localhost'] ... } ... }) We can migrate that file quite easily with the spec object: >>> spec.migrate_config_file('/path/to/config.json') Will result in /path/to/config.json being overwritten: ``{"db_name": "new_default", "db_host": "1.2.3.4"}`` Args: config_file_path (str): The path to your current config always_update (bool): Always update values (even to None) current_file_type (str): Defaults to self._file_type output_file_name (str): Defaults to the current_file_path output_file_type (str): Defaults to self._file_type create (bool): Create the file if it doesn't exist (otherwise error if the file does not exist). update_defaults (bool): Update values that have a value set to something listed in the previous_defaults dump_kwargs (dict): A key-value pair that will be passed to dump include_bootstrap (bool): Include bootstrap items in the output Returns: box.Box: The newly migrated configuration.
def _gerrit_user_to_author(props, username="unknown"):
    username = props.get("username", username)
    username = props.get("name", username)
    if "email" in props:
        username += " <%(email)s>" % props
    return username
Convert Gerrit account properties to Buildbot format Take into account missing values
def update(self, title, key): json = None if title and key: data = {'title': title, 'key': key} json = self._json(self._patch(self._api, data=dumps(data)), 200) if json: self._update_(json) return True return False
Update this key. :param str title: (required), title of the key :param str key: (required), text of the key file :returns: bool
def remove_path(path):
    if path is None or not os.path.exists(path):
        return
    if platform.system() == 'Windows':
        os.chmod(path, stat.S_IWRITE)
    try:
        if os.path.isdir(path):
            shutil.rmtree(path)
        elif os.path.isfile(path):
            shutil.os.remove(path)
    except OSError:
        logger.exception("Could not remove path: %s" % path)
remove path from file system If path is None - do nothing
def get_all_available_leaves(self, language=None, forbidden_item_ids=None): return self.get_all_leaves(language=language, forbidden_item_ids=forbidden_item_ids)
Get all available leaves.
def _sanitize_usecols(usecols):
    if usecols is None:
        return None
    try:
        pats = usecols.split(',')
        pats = [p.strip() for p in pats if p]
    except AttributeError:
        usecols = [int(c) for c in usecols]
        usecols.sort()
        return tuple(usecols)
    cols = []
    for pat in pats:
        if ':' in pat:
            c1, c2 = pat.split(':')
            n1 = letter2num(c1, zbase=True)
            n2 = letter2num(c2, zbase=False)
            cols += range(n1, n2)
        else:
            cols += [letter2num(pat, zbase=True)]
    cols = list(set(cols))
    cols.sort()
    return tuple(cols)
Make a tuple of sorted integers and return it. Return None if usecols is None
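A small usage sketch of _sanitize_usecols using the non-string branch (integer indices), which avoids the letter2num helper:

assert _sanitize_usecols(None) is None
assert _sanitize_usecols([3, 1, 2]) == (1, 2, 3)   # sorted and returned as a tuple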
def and_evaluator(conditions, leaf_evaluator):
    saw_null_result = False
    for condition in conditions:
        result = evaluate(condition, leaf_evaluator)
        if result is False:
            return False
        if result is None:
            saw_null_result = True
    return None if saw_null_result else True
Evaluates a list of conditions as if the evaluator had been applied to each entry and the results AND-ed together. Args: conditions: List of conditions ex: [operand_1, operand_2]. leaf_evaluator: Function which will be called to evaluate leaf condition values. Returns: Boolean: - True if all operands evaluate to True. - False if a single operand evaluates to False. None: if conditions couldn't be evaluated.
def update_machine_state(state_path): charmhelpers.contrib.templating.contexts.juju_state_to_yaml( salt_grains_path) subprocess.check_call([ 'salt-call', '--local', 'state.template', state_path, ])
Update the machine state using the provided state declaration.
def create_new_account(data_dir, password, **geth_kwargs): if os.path.exists(password): geth_kwargs['password'] = password command, proc = spawn_geth(dict( data_dir=data_dir, suffix_args=['account', 'new'], **geth_kwargs )) if os.path.exists(password): stdoutdata, stderrdata = proc.communicate() else: stdoutdata, stderrdata = proc.communicate(b"\n".join((password, password))) if proc.returncode: raise ValueError(format_error_message( "Error trying to create a new account", command, proc.returncode, stdoutdata, stderrdata, )) match = account_regex.search(stdoutdata) if not match: raise ValueError(format_error_message( "Did not find an address in process output", command, proc.returncode, stdoutdata, stderrdata, )) return b'0x' + match.groups()[0]
Creates a new Ethereum account on geth. This is useful for testing when you want to stress interaction (transfers) between Ethereum accounts. This command communicates with ``geth`` command over terminal interaction. It creates keystore folder and new account there. This function only works against offline geth processes, because geth builds an account cache when starting up. If geth process is already running you can create new accounts using `web3.personal.newAccount() <https://github.com/ethereum/go-ethereum/wiki/JavaScript-Console#personalnewaccount>_` RPC API. Example py.test fixture for tests: .. code-block:: python import os from geth.wrapper import DEFAULT_PASSWORD_PATH from geth.accounts import create_new_account @pytest.fixture def target_account() -> str: '''Create a new Ethereum account on a running Geth node. The account can be used as a withdrawal target for tests. :return: 0x address of the account ''' # We store keystore files in the current working directory # of the test run data_dir = os.getcwd() # Use the default password "this-is-not-a-secure-password" # as supplied in geth/default_blockchain_password file. # The supplied password must be bytes, not string, # as we only want ASCII characters and do not want to # deal encoding problems with passwords account = create_new_account(data_dir, DEFAULT_PASSWORD_PATH) return account :param data_dir: Geth data fir path - where to keep "keystore" folder :param password: Path to a file containing the password for newly created account :param geth_kwargs: Extra command line arguments passwrord to geth :return: Account as 0x prefixed hex string
def _choose_rest_version(self):
    versions = self._list_available_rest_versions()
    versions = [LooseVersion(x) for x in versions if x in self.supported_rest_versions]
    if versions:
        return max(versions)
    else:
        raise PureError(
            "Library is incompatible with all REST API versions supported"
            "by the target array.")
Return the newest REST API version supported by target array.