code string | signature string | docstring string | loss_without_docstring float64 | loss_with_docstring float64 | factor float64 |
|---|---|---|---|---|---|
logger.debug("starting")
logger.debug(f"running step {self.module}")
self.run_step_function(context)
logger.debug(f"step {self.module} done") | def invoke_step(self, context) | Invoke 'run_step' in the dynamically loaded step module.
Don't invoke this from outside the Step class. Use
pypyr.dsl.Step.run_step instead.
invoke_step just does the bare module step invocation; it does not
evaluate any of the decorator logic surrounding the step. So unless
you... | 5.5954 | 4.975055 | 1.124691 |
logger.debug("starting")
# The decorator attributes might contain formatting expressions that
# change whether they evaluate True or False, thus apply formatting at
# last possible instant.
run_me = context.get_formatted_as_type(self.run_me, out_type=bool)
skip_... | def run_conditional_decorators(self, context) | Evaluate the step decorators to decide whether to run step or not.
Use pypyr.dsl.Step.run_step if you intend on executing the step the
same way pypyr does.
Args:
context: (pypyr.context.Context) The pypyr context. This arg will
mutate. | 3.958787 | 3.79723 | 1.042546 |
logger.debug("starting")
# friendly reminder: [] list obj (i.e. empty) evals False
if self.foreach_items:
self.foreach_loop(context)
else:
# since no looping required, don't pollute output with looping info
self.run_conditional_decorators(contex... | def run_foreach_or_conditional(self, context) | Run the foreach sequence or the conditional evaluation.
Args:
context: (pypyr.context.Context) The pypyr context. This arg will
mutate. | 20.901461 | 17.451704 | 1.197674 |
logger.debug("starting")
# the in params should be added to context before step execution.
self.set_step_input_context(context)
if self.while_decorator:
self.while_decorator.while_loop(context,
self.run_foreach_or_conditio... | def run_step(self, context) | Run a single pipeline step.
Args:
context: (pypyr.context.Context) The pypyr context. This arg will
mutate. | 8.015883 | 7.446445 | 1.076471 |
logger.debug("starting")
if self.in_parameters is not None:
parameter_count = len(self.in_parameters)
if parameter_count > 0:
logger.debug(
f"Updating context with {parameter_count} 'in' "
"parameters.")
... | def set_step_input_context(self, context) | Append step's 'in' parameters to context, if they exist.
Append the [in] dictionary to the context. This will overwrite
existing values if the same keys are already in there. I.e. if
in_parameters has {'eggs': 'boiled'} and key 'eggs' already
exists in context, context['eggs'] hereafter w... | 4.206799 | 3.264617 | 1.288604 |
logger.debug("starting")
context['retryCounter'] = counter
logger.info(f"retry: running step with counter {counter}")
try:
step_method(context)
result = True
except Exception as ex_info:
if self.max:
if counter == self... | def exec_iteration(self, counter, context, step_method) | Run a single retry iteration.
This method abides by the signature invoked by poll.while_until_true,
which is to say (counter, *args, **kwargs). In a normal execution
chain, this method's args are passed by self.retry_loop, where context
and step_method are set. while_until_true injects counter a... | 3.720022 | 3.477485 | 1.069745 |
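The retry iteration above follows a poll-style `(counter, *args, **kwargs)` signature. A minimal standalone sketch of that shape (the `max_attempts` parameter and `flaky_step` are illustrative assumptions, not pypyr's actual API):

```python
def exec_iteration(counter, context, step_method, max_attempts=None):
    """Run one retry attempt; return True to tell the loop to stop (sketch)."""
    context['retryCounter'] = counter
    try:
        step_method(context)
        return True  # step succeeded: stop retrying
    except Exception:
        # only re-raise when the final allowed attempt also fails
        if max_attempts and counter == max_attempts:
            raise
        return False  # swallow the error so the loop can retry

# hypothetical step that fails twice, then succeeds
attempts = []
def flaky_step(ctx):
    attempts.append(ctx['retryCounter'])
    if len(attempts) < 3:
        raise ValueError('not yet')

ctx = {}
results = [exec_iteration(i, ctx, flaky_step, max_attempts=5) for i in (1, 2, 3)]
# results → [False, False, True]: two swallowed failures, then success
```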
logger.debug("starting")
context['retryCounter'] = 0
sleep = context.get_formatted_as_type(self.sleep, out_type=float)
if self.max:
max = context.get_formatted_as_type(self.max, out_type=int)
logger.info(f"retry decorator will try {max} times at {sleep... | def retry_loop(self, context, step_method) | Run step inside a retry loop.
Args:
context: (pypyr.context.Context) The pypyr context. This arg will
mutate - after method execution will contain the new
updated context.
step_method: (method/function) This is the method/function that
... | 7.688169 | 7.264225 | 1.05836 |
logger.debug("starting")
context['whileCounter'] = counter
logger.info(f"while: running step with counter {counter}")
step_method(context)
logger.debug(f"while: done step {counter}")
result = False
# if no stop, just iterating to max
if self.st... | def exec_iteration(self, counter, context, step_method) | Run a single loop iteration.
This method abides by the signature invoked by poll.while_until_true,
which is to say (counter, *args, **kwargs). In a normal execution
chain, this method's args are passed by self.while_loop, where context
and step_method are set. while_until_true injects counter as... | 10.139438 | 8.543422 | 1.186812 |
logger.debug("starting")
context['whileCounter'] = 0
if self.stop is None and self.max is None:
# the ctor already does this check, but guess theoretically
# consumer could have messed with the props since ctor
logger.error(f"while decorator missing... | def while_loop(self, context, step_method) | Run step inside a while loop.
Args:
context: (pypyr.context.Context) The pypyr context. This arg will
mutate - after method execution will contain the new
updated context.
step_method: (method/function) This is the method/function that
... | 4.293459 | 4.122687 | 1.041422 |
logger.debug("started")
deprecated(context)
context.assert_key_has_value(key='fetchYaml', caller=__name__)
fetch_yaml_input = context.get_formatted('fetchYaml')
if isinstance(fetch_yaml_input, str):
file_path = fetch_yaml_input
destination_key_expression = None
else:
... | def run_step(context) | Load a yaml file into the pypyr context.
Yaml parsed from the file will be merged into the pypyr context. This will
overwrite existing values if the same keys are already in there.
I.e. if the file yaml has {'eggs': 'boiled'} and context {'eggs': 'fried'}
already exists, returned context['eggs'] will be 'b... | 4.177268 | 3.588708 | 1.164003 |
logger.debug("started")
deprecated(context)
ObjectRewriterStep(__name__, 'fileFormatYaml', context).run_step(
YamlRepresenter())
logger.debug("done") | def run_step(context) | Parse input yaml file and substitute {tokens} from context.
Loads yaml into memory to do parsing, so be aware of big files.
Args:
context: pypyr.context.Context. Mandatory.
- fileFormatYaml
- in. mandatory.
str, path-like, or an iterable (list/tuple) of
... | 33.268105 | 27.753796 | 1.198687 |
def decorator(f):
logger.debug("started")
def sleep_looper(*args, **kwargs):
logger.debug(f"Looping every {interval} seconds for "
f"{max_attempts} attempts")
for i in range(1, max_attempts + 1):
result = f(*args, **kwargs)
... | def wait_until_true(interval, max_attempts) | Decorator that executes a function until it returns True.
Executes wrapped function at every number of seconds specified by interval,
until wrapped function either returns True or max_attempts are exhausted,
whichever comes 1st. The wrapped function can have any given signature.
Use me if you always w... | 3.225763 | 3.484255 | 0.925811 |
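A self-contained sketch of a `wait_until_true`-style decorator as described: call the wrapped function up to `max_attempts` times, sleeping `interval` seconds between attempts. The behavior on exhaustion (returning False) is an assumption for the sketch, not necessarily pypyr's.

```python
import time
from functools import wraps

def wait_until_true(interval, max_attempts):
    """Retry the wrapped function until it returns True (sketch)."""
    def decorator(f):
        @wraps(f)
        def sleep_looper(*args, **kwargs):
            for i in range(1, max_attempts + 1):
                if f(*args, **kwargs):
                    return True
                if i < max_attempts:
                    time.sleep(interval)
            return False  # attempts exhausted without a True result
        return sleep_looper
    return decorator

calls = []

@wait_until_true(interval=0, max_attempts=5)
def check_ready():
    calls.append(1)
    return len(calls) >= 3  # "ready" on the 3rd call

result = check_ready()
# result is True; check_ready's body ran exactly 3 times
```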
def decorator(f):
logger.debug("started")
def sleep_looper(*args, **kwargs):
if max_attempts:
logger.debug(f"Looping every {interval} seconds for "
f"{max_attempts} attempts")
else:
logger.debug(f"Looping ever... | def while_until_true(interval, max_attempts) | Decorator that executes a function until it returns True.
Executes wrapped function at every number of seconds specified by interval,
until wrapped function either returns True or max_attempts are exhausted,
whichever comes 1st.
The difference between while_until_true and wait_until_true is that the
... | 4.771858 | 4.85943 | 0.981979 |
logger.debug("started")
deprecated(context)
StreamRewriterStep(__name__, 'fileFormat', context).run_step()
logger.debug("done") | def run_step(context) | Parse input file and substitute {tokens} from context.
Args:
context: pypyr.context.Context. Mandatory.
The following context keys expected:
- fileFormat
- in. mandatory.
str, path-like, or an iterable (list/tuple) of
... | 24.757627 | 23.994978 | 1.031784 |
if 'fileFormatIn' in context:
context.assert_keys_have_values(__name__,
'fileFormatIn',
'fileFormatOut')
context['fileFormat'] = {'in': context['fileFormatIn'],
'out': context['file... | def deprecated(context) | Create new style in params from deprecated. | 8.432031 | 7.86201 | 1.072503 |
logger.debug("started")
format_expression = context.get('nowUtcIn', None)
if format_expression:
formatted_expression = context.get_formatted_string(format_expression)
context['nowUtc'] = datetime.now(
timezone.utc).strftime(formatted_expression)
else:
context['... | def run_step(context) | Save the current UTC datetime to the pypyr context.
Args:
context: pypyr.context.Context. Mandatory.
The following context key is optional:
- nowUtcIn. str. Datetime formatting expression. For full list
of possible expressions, check here:
... | 4.531832 | 3.208292 | 1.412537 |
logger.debug("started")
assert context, f"context must have value for {__name__}"
deprecated(context)
context.assert_key_has_value('env', __name__)
found_get = env_get(context)
found_set = env_set(context)
found_unset = env_unset(context)
# at least 1 of envGet, envSet or envUnse... | def run_step(context) | Get, set, unset $ENVs.
Context is a dictionary or dictionary-like. context is mandatory.
Input context is:
env:
get: {dict}
set: {dict}
unset: [list]
At least one of env's sub-keys (get, set or unset) must exist.
This step will run whatever combination of ... | 4.982325 | 4.492947 | 1.108921 |
get = context['env'].get('get', None)
exists = False
if get:
logger.debug("start")
for k, v in get.items():
logger.debug(f"setting context {k} to $ENV {v}")
context[k] = os.environ[v]
logger.info(f"saved {len(get)} $ENVs to context.")
exists = ... | def env_get(context) | Get $ENVs into the pypyr context.
Context is a dictionary or dictionary-like. context is mandatory.
context['env']['get'] must exist. It's a dictionary.
Values are the names of the $ENVs to write to the pypyr context.
Keys are the pypyr context item to which to write the $ENV values.
For example,... | 5.199476 | 3.880969 | 1.339737 |
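The get behavior above can be sketched standalone — keys name the context items to write, values name the $ENVs to read (a missing $ENV raises KeyError, matching the docstring's strictness; `DEMO_VAR` is a hypothetical variable for the demo):

```python
import os

def env_get(context):
    """Copy named $ENVs into the context dict; True if anything was copied."""
    get = context.get('env', {}).get('get')
    if not get:
        return False
    for context_key, env_name in get.items():
        # raises KeyError if the $ENV doesn't exist, by design
        context[context_key] = os.environ[env_name]
    return True

os.environ['DEMO_VAR'] = 'hello'  # hypothetical $ENV for the demo
context = {'env': {'get': {'greeting': 'DEMO_VAR'}}}
found = env_get(context)
# found is True and context['greeting'] == 'hello'
```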
env_set = context['env'].get('set', None)
exists = False
if env_set:
logger.debug("started")
for k, v in env_set.items():
logger.debug(f"setting ${k} to context[{v}]")
os.environ[k] = context.get_formatted_string(v)
logger.info(f"set {len(env_set)} $EN... | def env_set(context) | Set $ENVs to specified strings from the pypyr context.
Args:
context: is dictionary-like. context is mandatory.
context['env']['set'] must exist. It's a dictionary.
Values are strings to write to $ENV.
Keys are the names of the $ENV values to which to writ... | 4.633052 | 3.715173 | 1.247062 |
unset = context['env'].get('unset', None)
exists = False
if unset:
logger.debug("started")
for env_var_name in unset:
logger.debug(f"unsetting ${env_var_name}")
try:
del os.environ[env_var_name]
except KeyError:
# If ... | def env_unset(context) | Unset $ENVs.
Context is a dictionary or dictionary-like. context is mandatory.
context['env']['unset'] must exist. It's a list.
List items are the names of the $ENV values to unset.
For example, say input context is:
key1: value1
key2: value2
key3: value3
env:
... | 6.572134 | 6.376937 | 1.03061 |
env = context.get('env', None)
get_info, set_info, unset_info = context.keys_of_type_exist(
('envGet', dict),
('envSet', dict),
('envUnset', list)
)
found_at_least_one = (get_info.key_in_context or set_info.key_in_context
or unset_info.key_in_cont... | def deprecated(context) | Handle deprecated context input. | 4.065225 | 4.013485 | 1.012891 |
logger.debug("started")
assert context, f"context must have value for {__name__}"
deprecated(context)
context.assert_key_has_value('assert', __name__)
assert_this = context['assert']['this']
is_equals_there = 'equals' in context['assert']
if is_equals_there:
assert_equals = co... | def run_step(context) | Assert that something is True or equal to something else.
Args:
context: dictionary-like pypyr.context.Context. context is mandatory.
Uses the following context keys in context:
- assert
- this. mandatory. Any type. If assert['equals'] not specified,
ev... | 4.467861 | 3.901177 | 1.14526 |
assert_context = context.get('assert', None)
# specifically do "key in dict" to avoid python bool eval thinking
# None/Empty values mean the key isn't there.
if 'assertThis' in context:
assert_this = context['assertThis']
assert_context = context['assert'] = {'this': assert_this}
... | def deprecated(context) | Handle deprecated context input. | 10.374995 | 10.074304 | 1.029847 |
logger.debug("started")
assert context, f"context must have value for {__name__}"
deprecated(context)
found_at_least_one = False
context.assert_key_has_value('tar', __name__)
tar = context['tar']
if 'extract' in tar:
found_at_least_one = True
tar_extract(context)
... | def run_step(context) | Archive and/or extract tars with or without compression.
Args:
context: dictionary-like. Mandatory.
Expects the following context:
tar:
extract:
- in: /path/my.tar
out: /out/path
archive:
- in: /dir/to/archive
... | 5.972263 | 5.397684 | 1.106449 |
format = context['tar'].get('format', None)
if format or format == '':
mode = f"r:{context.get_formatted_string(format)}"
else:
mode = 'r:*'
return mode | def get_file_mode_for_reading(context) | Get file mode for reading from tar['format'].
This should return r:*, r:gz, r:bz2 or r:xz. If user specified something
wacky in tar['format'], that's their business.
In theory r:* will auto-deduce the correct format. | 7.336482 | 5.843259 | 1.255546 |
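The exists-but-empty distinction drives the mode string. A sketch of the mapping with the {token} formatting stripped out (the bare `tar_format` argument is an assumption to keep it standalone):

```python
def get_file_mode_for_reading(tar_format):
    """Map tar['format'] to a tarfile read mode (sketch, no {token} formatting)."""
    # empty string has special meaning: exists-but-empty => explicit format
    if tar_format or tar_format == '':
        return f"r:{tar_format}"
    # None/absent: r:* lets tarfile auto-deduce the compression
    return 'r:*'

modes = [get_file_mode_for_reading(f) for f in ('gz', '', None)]
# modes → ['r:gz', 'r:', 'r:*']
```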
format = context['tar'].get('format', None)
# slightly weird double-check because falsy format could mean either format
# doesn't exist in input, OR that it exists and is empty. Exists-but-empty
# has special meaning - default to no compression.
if format or format == '':
mode = f"w:{co... | def get_file_mode_for_writing(context) | Get file mode for writing from tar['format'].
This should return w:, w:gz, w:bz2 or w:xz. If user specified something
wacky in tar['format'], that's their business. | 13.996604 | 11.341802 | 1.234072 |
logger.debug("start")
mode = get_file_mode_for_writing(context)
for item in context['tar']['archive']:
# value is the destination tar. Allow string interpolation.
destination = context.get_formatted_string(item['out'])
# key is the source to archive
source = context.ge... | def tar_archive(context) | Archive specified path to a tar archive.
Args:
context: dictionary-like. context is mandatory.
context['tar']['archive'] must exist. It's a dictionary.
keys are the paths to archive.
values are the destination output paths.
Example:
tar:
archive:... | 4.854778 | 4.365475 | 1.112085 |
logger.debug("start")
mode = get_file_mode_for_reading(context)
for item in context['tar']['extract']:
# in is the path to the tar to extract. Allows string interpolation.
source = context.get_formatted_string(item['in'])
# out is the output directory. Allows string interpolation.... | def tar_extract(context) | Extract all members of tar archive to specified path.
Args:
context: dictionary-like. context is mandatory.
context['tar']['extract'] must exist. It's a dictionary.
keys are the path to the tar to extract.
values are the destination paths.
Example:
tar:
... | 5.186833 | 4.76366 | 1.088834 |
tar = context.get('tar', None)
# at least 1 of tarExtract or tarArchive must exist in context
tar_extract, tar_archive = context.keys_of_type_exist(
('tarExtract', list),
('tarArchive', list))
found_at_least_one = (tar_extract.key_in_context
or tar_archiv... | def deprecated(context) | Handle deprecated context input. | 5.183815 | 5.120823 | 1.012301 |
logger.debug("started")
CmdStep(name=__name__, context=context).run_step(is_shell=True)
logger.debug("done") | def run_step(context) | Run shell command with shell interpolation.
Context is a dictionary or dictionary-like.
Context must contain the following keys:
cmd: <<cmd string>> (command + args to execute.)
OR, as a dict
cmd:
run: str. mandatory. <<cmd string>> command + args to execute.
save: bool. defaul... | 10.794502 | 17.113304 | 0.630767 |
logger.debug("started")
assert context, f"context must have value for {__name__}"
context.assert_key_has_value('envGet', __name__)
# allow a list OR a single getenv dict
if isinstance(context['envGet'], list):
get_items = context['envGet']
else:
get_items = [context['envGe... | def run_step(context) | Get $ENVs, allowing a default if not found.
Set context properties from environment variables, and specify a default
if the environment variable is not found.
This differs from pypyr.steps.env get, which raises an error if attempting
to read an $ENV that doesn't exist.
Args:
context. mand... | 3.8427 | 3.313065 | 1.159862 |
if not isinstance(get_item, dict):
raise ContextError('envGet must contain a list of dicts.')
env = get_item.get('env', None)
if not env:
raise KeyNotInContextError(
'context envGet[env] must exist in context for envGet.')
key = get_item.get('key', None)
if not k... | def get_args(get_item) | Parse env, key, default out of input dict.
Args:
get_item: dict. contains keys env/key/default
Returns:
(env, key, has_default, default) tuple, where
env: str. env var name.
key: str. save env value to this context key.
has_default: bool. True if default spe... | 2.870045 | 1.960776 | 1.463729 |
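A sketch of the parsing described, using stdlib TypeError/KeyError in place of pypyr's ContextError/KeyNotInContextError (an assumption to keep the snippet self-contained):

```python
def get_args(get_item):
    """Parse env, key, has_default, default out of one envGet item (sketch)."""
    if not isinstance(get_item, dict):
        raise TypeError('envGet must contain a list of dicts.')
    env = get_item.get('env')
    if not env:
        raise KeyError('context envGet[env] must exist in context for envGet.')
    key = get_item.get('key')
    if not key:
        raise KeyError('context envGet[key] must exist in context for envGet.')
    # "key in dict" check so that a None/empty default still counts as specified
    has_default = 'default' in get_item
    default = get_item.get('default')
    return env, key, has_default, default

args = get_args({'env': 'HOME', 'key': 'homeDir', 'default': '/tmp'})
# args → ('HOME', 'homeDir', True, '/tmp')
```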
logger.debug("started")
context.assert_key_has_value(key='pycode', caller=__name__)
logger.debug(f"Executing python string: {context['pycode']}")
locals_dictionary = locals()
exec(context['pycode'], globals(), locals_dictionary)
# It looks like this dance might be unnecessary in python 3.... | def run_step(context) | Execute dynamic python code.
Context is a dictionary or dictionary-like.
Context must contain key 'pycode'
Will exec context['pycode'] as dynamically interpreted python statements.
context is mandatory. When you execute the pipeline, it should look
something like this:
pipeline-runner [na... | 6.834262 | 6.196525 | 1.102919 |
assert context_arg, ("pipeline must be invoked with context arg set. For "
"this yaml parser you're looking for something "
"like: "
"pypyr pipelinename './myyamlfile.yaml'")
logger.debug("starting")
logger.debug(f"attempting to... | def get_parsed_context(context_arg) | Parse input context string and returns context as dictionary. | 6.283481 | 6.321275 | 0.994021 |
parser = argparse.ArgumentParser(
allow_abbrev=True,
description='pypyr pipeline runner')
parser.add_argument('pipeline_name',
help='Name of pipeline to run. It should exist in the '
'./pipelines directory.')
parser.add_argument(dest='pipe... | def get_parser() | Return ArgumentParser for pypyr cli. | 4.110333 | 3.878258 | 1.05984 |
if args is None:
args = sys.argv[1:]
parsed_args = get_args(args)
try:
return pypyr.pipelinerunner.main(
pipeline_name=parsed_args.pipeline_name,
pipeline_context_input=parsed_args.pipeline_context,
working_dir=parsed_args.working_dir,
l... | def main(args=None) | Entry point for pypyr cli.
The setup_py entry_point wraps this in sys.exit already so this effectively
becomes sys.exit(main()).
The __main__ entry point similarly wraps sys.exit(). | 3.358459 | 3.251225 | 1.032982 |
assert is_shell is not None, ("is_shell param must exist for CmdStep.")
# why? If shell is True, it is recommended to pass args as a string
# rather than as a sequence.
if is_shell:
args = self.cmd_text
else:
args = shlex.split(self.cmd_text)
... | def run_step(self, is_shell) | Run a command.
Runs a program or executable. If is_shell is True, executes the command
through the shell.
Args:
is_shell: bool. defaults False. Set to true to execute cmd through
the default shell. | 4.354275 | 4.394919 | 0.990752 |
logger.debug("started")
context.assert_key_has_value(key='contextClear', caller=__name__)
for k in context['contextClear']:
logger.debug(f"removing {k} from context")
# slightly unorthodox pop returning None means you don't get a KeyError
# if key doesn't exist
context.... | def run_step(context) | Remove specified keys from context.
Args:
Context is a dictionary or dictionary-like.
context['contextClear'] must exist. It's a dictionary.
Will iterate context['contextClear'] and remove those keys from
context.
For example, say input context is:
key1: value1
... | 7.402579 | 6.316751 | 1.171897 |
logger.debug("started")
pypyr.steps.cmd.run_step(context)
logger.debug("done") | def run_step(context) | Run command, program or executable.
Context is a dictionary or dictionary-like.
Context must contain the following keys:
cmd: <<cmd string>> (command + args to execute.)
OR, as a dict
cmd:
run: str. mandatory. <<cmd string>> command + args to execute.
save: bool. defaults False. s... | 9.634625 | 10.779205 | 0.893816 |
logger.debug("started")
context.assert_key_has_value(key='defaults', caller=__name__)
context.set_defaults(context['defaults'])
logger.info(f"set {len(context['defaults'])} context item defaults.")
logger.debug("done") | def run_step(context) | Set hierarchy into context with substitutions if it doesn't exist yet.
context is a dictionary or dictionary-like.
context['defaults'] must exist. It's a dictionary.
Will iterate context['defaults'] and add these as new values where
their keys don't already exist. While it's doing so, it will leave
... | 8.865732 | 9.595146 | 0.923981 |
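The defaults behavior — add keys only where they don't already exist — can be sketched with a plain dict. The real step also handles nested hierarchies and {token} substitutions; this shallow version is an illustration only:

```python
def set_defaults(context, defaults):
    """Add each default only where context lacks the key (shallow sketch)."""
    for key, value in defaults.items():
        context.setdefault(key, value)

context = {'eggs': 'fried'}
set_defaults(context, {'eggs': 'boiled', 'toast': 'buttered'})
# existing 'eggs' is left alone; missing 'toast' is added
```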
logger.debug("starting")
assert pipeline
assert steps_group
logger.debug(f"retrieving {steps_group} steps from pipeline")
if steps_group in pipeline:
steps = pipeline[steps_group]
if steps is None:
logger.warning(
f"{steps_group}: sequence has no eleme... | def get_pipeline_steps(pipeline, steps_group) | Get the steps attribute of module pipeline.
If there is no steps sequence on the pipeline, return None. Guess you
could theoretically want to run a pipeline with nothing in it. | 4.105674 | 3.856518 | 1.064607 |
logger.debug("starting")
try:
assert pipeline
# if no on_failure exists, it'll do nothing.
run_step_group(pipeline_definition=pipeline,
step_group_name='on_failure',
context=context)
except Exception as exception:
logger.erro... | def run_failure_step_group(pipeline, context) | Run the on_failure step group if it exists.
This function will swallow all errors, to prevent obfuscating the error
condition that got it here to begin with. | 6.383922 | 5.319242 | 1.200156 |
logger.debug("starting")
assert isinstance(
context, dict), "context must be a dictionary, even if empty {}."
if steps is None:
logger.debug("No steps found to execute.")
else:
step_count = 0
for step in steps:
step_instance = Step(step)
ste... | def run_pipeline_steps(steps, context) | Run the run_step(context) method of each step in steps.
Args:
steps: list. Sequence of Steps to execute
context: pypyr.context.Context. The pypyr context. Will mutate. | 3.667073 | 3.678869 | 0.996794 |
logger.debug(f"starting {step_group_name}")
assert step_group_name
steps = get_pipeline_steps(pipeline=pipeline_definition,
steps_group=step_group_name)
run_pipeline_steps(steps=steps, context=context)
logger.debug(f"done {step_group_name}") | def run_step_group(pipeline_definition, step_group_name, context) | Get the specified step group from the pipeline and run its steps. | 3.24513 | 3.198036 | 1.014726 |
os.makedirs(os.path.abspath(os.path.dirname(path)), exist_ok=True) | def ensure_dir(path) | Create all parent directories of path if they don't exist.
Args:
path. Path-like object. Create parent dirs to this path.
Return:
None. | 2.768811 | 4.568908 | 0.606012 |
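The one-liner above creates every missing parent directory of a target path. Usage under a throwaway temp directory:

```python
import os
import tempfile

def ensure_dir(path):
    """Create all parent directories of path if they don't exist."""
    os.makedirs(os.path.abspath(os.path.dirname(path)), exist_ok=True)

with tempfile.TemporaryDirectory() as tmp:
    target = os.path.join(tmp, 'a', 'b', 'file.txt')
    ensure_dir(target)  # creates tmp/a/b, but not the file itself
    parent_exists = os.path.isdir(os.path.dirname(target))
```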
if isinstance(path, str):
return glob.glob(path, recursive=True)
if isinstance(path, os.PathLike):
# hilariously enough, glob doesn't like path-like. Gotta be str.
return glob.glob(str(path), recursive=True)
elif isinstance(path, (list, tuple)):
# each glob returns a lis... | def get_glob(path) | Process the input path, applying globbing and formatting.
Do note that this will return files AND directories that match the glob.
No tilde expansion is done, but *, ?, and character ranges expressed with
[] will be correctly matched.
Escape all special characters ('?', '*' and '['). For a literal m... | 3.707174 | 3.890131 | 0.952969 |
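A sketch of the str / path-like / iterable dispatch described above (the list branch flattens per-item glob results; the TypeError fallback is an assumption):

```python
import glob
import os
import tempfile

def get_glob(path):
    """Apply recursive globbing to a str, path-like, or list/tuple (sketch)."""
    if isinstance(path, str):
        return glob.glob(path, recursive=True)
    if isinstance(path, os.PathLike):
        # glob wants a str, not a path-like
        return glob.glob(str(path), recursive=True)
    if isinstance(path, (list, tuple)):
        # each glob returns a list, so flatten the per-item results
        return [p for item in path for p in glob.glob(str(item), recursive=True)]
    raise TypeError('path must be str, path-like, or a list/tuple of those.')

with tempfile.TemporaryDirectory() as tmp:
    for name in ('one.txt', 'two.txt'):
        open(os.path.join(tmp, name), 'w').close()
    match_count = len(get_glob(os.path.join(tmp, '*.txt')))
# match_count → 2
```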
return (
path1 and path2
and os.path.isfile(path1) and os.path.isfile(path2)
and os.path.samefile(path1, path2)) | def is_same_file(path1, path2) | Return True if path1 is the same file as path2.
The reason for this dance is that samefile throws if either file doesn't
exist.
Args:
path1: str or path-like.
path2: str or path-like.
Returns:
bool. True if the same file, False if not. | 2.458771 | 3.191062 | 0.770518 |
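The "dance" exists because `os.path.samefile` throws if either path is missing; checking `isfile` first keeps the comparison safe. A runnable sketch (the `bool()` wrap is an addition so the return type matches the docstring):

```python
import os
import tempfile

def is_same_file(path1, path2):
    """True only if both paths exist as files and reference the same file."""
    # check isfile first because os.path.samefile throws on missing paths
    return bool(
        path1 and path2
        and os.path.isfile(path1) and os.path.isfile(path2)
        and os.path.samefile(path1, path2))

with tempfile.TemporaryDirectory() as tmp:
    existing = os.path.join(tmp, 'f.txt')
    open(existing, 'w').close()
    same = is_same_file(existing, existing)
    missing = is_same_file(existing, os.path.join(tmp, 'nope.txt'))
# same → True, missing → False (no exception raised)
```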
try:
os.replace(src, dest)
except Exception as ex_replace:
logger.error(f"error moving file {src} to "
f"{dest}. {ex_replace}")
raise | def move_file(src, dest) | Move source file to destination.
Overwrites dest.
Args:
src: str or path-like. source file
dest: str or path-like. destination file
Returns:
None.
Raises:
FileNotFoundError: out path parent doesn't exist.
OSError: if any IO operations go wrong. | 4.096595 | 4.759636 | 0.860695 |
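The move is a thin wrapper over `os.replace`, which overwrites dest and is atomic on the same filesystem. Minimal usage under a temp directory (error logging from the original omitted for brevity):

```python
import os
import tempfile

def move_file(src, dest):
    """Move src over dest, overwriting dest."""
    os.replace(src, dest)

with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, 'src.txt')
    dest = os.path.join(tmp, 'dest.txt')
    with open(src, 'w') as f:
        f.write('payload')
    move_file(src, dest)
    moved = os.path.isfile(dest) and not os.path.exists(src)
    with open(dest) as f:
        content = f.read()
```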
try:
move_file(src, dest)
except Exception:
try:
os.remove(src)
except Exception as ex_clean:
# at this point, something's deeply wrong, so log error.
# raising the original error, though, not this error in the
# error handler, as the ... | def move_temp_file(src, dest) | Move src to dest. Delete src if something goes wrong.
Overwrites dest.
Args:
src: str or path-like. source file
dest: str or path-like. destination file
Returns:
None.
Raises:
FileNotFoundError: out path parent doesn't exist.
OSError: if any IO operations go w... | 9.145359 | 9.182549 | 0.99595 |
if is_same_file(in_path, out_path):
logger.debug(
"in path and out path are the same file. writing to temp "
"file and then replacing in path with the temp file.")
out_path = None
logger.debug(f"opening source file: {in_path}")
wi... | def in_to_out(self, in_path, out_path=None) | Load file into object, formats, writes object to out.
If in_path and out_path point to the same thing it will in-place edit
and overwrite the in path. Even easier, if you do want to edit a file
in place, don't specify out_path, or set it to None.
Args:
in_path: str or path-... | 2.858359 | 2.775903 | 1.029705 |
is_in_place_edit = False
if is_same_file(in_path, out_path):
logger.debug(
"in path and out path are the same file. writing to temp "
"file and then replacing in path with the temp file.")
out_path = None
is_in_place_edit = Tru... | def in_to_out(self, in_path, out_path=None) | Write a single file in to out, running self.formatter on each line.
If in_path and out_path point to the same thing it will in-place edit
and overwrite the in path. Even easier, if you do want to edit a file
in place, don't specify out_path, or set it to None.
Args:
in_path... | 4.650686 | 4.377364 | 1.06244 |
json.dump(payload, file, indent=2, ensure_ascii=False) | def dump(self, file, payload) | Dump json object to open file output.
Writes json with 2 spaces indentation.
Args:
file: Open file-like object. Must be open for writing.
payload: The Json object to write to file.
Returns:
None. | 3.884516 | 6.811688 | 0.570272 |
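Standalone demo of the same dump call (a StringIO stands in for the open file, so the output is easy to inspect):

```python
import io
import json

def dump(file, payload):
    """Write payload as 2-space-indented JSON, keeping non-ASCII intact."""
    json.dump(payload, file, indent=2, ensure_ascii=False)

buf = io.StringIO()  # any open, writable file-like object works
dump(buf, {'eggs': 'boiled'})
text = buf.getvalue()
# text → '{\n  "eggs": "boiled"\n}'
```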
logger.debug("started")
deprecated(context)
StreamReplacePairsRewriterStep(__name__, 'fileReplace', context).run_step()
logger.debug("done") | def run_step(context) | Parse input file and replace a search string.
This also does string substitutions from context on the fileReplacePairs.
It does this before it searches and replaces the in file.
Be careful of order. If fileReplacePairs is not an ordered collection,
replacements could evaluate in any given order. If this i... | 35.796795 | 27.068504 | 1.322452 |
logger.debug("started")
deprecated(context)
ObjectRewriterStep(__name__, 'fileFormatJson', context).run_step(
JsonRepresenter())
logger.debug("done") | def run_step(context) | Parse input json file and substitute {tokens} from context.
Loads json into memory to do parsing, so be aware of big files.
Args:
context: pypyr.context.Context. Mandatory.
- fileFormatJson
- in. mandatory.
str, path-like, or an iterable (list/... | 32.873722 | 27.793978 | 1.182764 |
logging.basicConfig(
format='%(asctime)s %(levelname)s:%(name)s:%(funcName)s: %(message)s',
datefmt='%Y-%m-%d %H:%M:%S',
level=log_level,
handlers=handlers) | def set_logging_config(log_level, handlers) | Set python logging library config.
Run this ONCE at the start of your process. It formats the python logging
module's output.
Defaults logging level to INFO (20). | 1.747832 | 1.80531 | 0.968161 |
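The config call is a `logging.basicConfig` with the format and datefmt shown. Runnable sketch (`force=True` is an addition here so the demo re-runs cleanly even if logging was already configured; it is not claimed to be in the original):

```python
import logging

def set_logging_config(log_level, handlers):
    """Configure the logging module's format, datefmt, level and handlers."""
    logging.basicConfig(
        format='%(asctime)s %(levelname)s:%(name)s:%(funcName)s: %(message)s',
        datefmt='%Y-%m-%d %H:%M:%S',
        level=log_level,
        handlers=handlers,
        force=True)  # addition for re-runnable demo; requires Python 3.8+

set_logging_config(logging.INFO, [logging.StreamHandler()])
effective_level = logging.getLogger().getEffectiveLevel()
# effective_level → 20 (INFO)
```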
handlers = []
console_handler = logging.StreamHandler()
handlers.append(console_handler)
if log_path:
file_handler = logging.FileHandler(log_path)
handlers.append(file_handler)
set_logging_config(root_log_level, handlers=handlers)
root_logger = logging.getLogger("pypyr")
... | def set_root_logger(root_log_level, log_path=None) | Set the root logger 'pypyr'. Do this before you do anything else.
Run once and only once at initialization. | 2.604779 | 2.517623 | 1.034618 |
if not context_arg:
logger.debug("pipeline invoked without context arg set. For "
"this json parser you're looking for something "
"like: "
"pypyr pipelinename '{\"key1\":\"value1\","
"\"key2\":\"value2\"}'")
re... | def get_parsed_context(context_arg) | Parse input context string and returns context as dictionary. | 9.270139 | 8.714801 | 1.063724 |
logger.debug("starting")
if 'context_parser' in pipeline:
parser_module_name = pipeline['context_parser']
logger.debug(f"context parser found: {parser_module_name}")
parser_module = pypyr.moduleloader.get_module(parser_module_name)
try:
logger.debug(f"running p... | def get_parsed_context(pipeline, context_in_string) | Execute get_parsed_context handler if specified.
Dynamically load the module specified by the context_parser key in pipeline
dict and execute the get_parsed_context function on that module.
Args:
pipeline: dict. Pipeline object.
context_in_string: string. Argument string used to initialize... | 3.502891 | 3.285668 | 1.066112 |
pypyr.log.logger.set_root_logger(log_level, log_path)
logger.debug("starting pypyr")
# pipelines specify steps in python modules that load dynamically.
# make it easy for the operator so that the cwd is automatically included
# without needing to pip install a package 1st.
pypyr.moduleloa... | def main(
pipeline_name,
pipeline_context_input,
working_dir,
log_level,
log_path,
) | Entry point for pypyr pipeline runner.
Call this once per pypyr run. Call me if you want to run a pypyr pipeline
from your own code. This function does some one-off 1st time initialization
before running the actual pipeline.
pipeline_name.yaml should be in the working_dir/pipelines/ directory.
Ar... | 6.728983 | 7.077837 | 0.950712 |
logger.debug("starting")
parsed_context = get_parsed_context(
pipeline=pipeline,
context_in_string=context_in_string)
context.update(parsed_context)
logger.debug("done") | def prepare_context(pipeline, context_in_string, context) | Prepare context for pipeline run.
Args:
pipeline: dict. Dictionary representing the pipeline.
context_in_string: string. Argument string used to initialize context.
context: pypyr.context.Context. Merge any new context generated from
context_in_string into this context inst... | 3.840209 | 4.132874 | 0.929186 |
logger.debug(f"you asked to run pipeline: {pipeline_name}")
if loader:
logger.debug(f"you set the pype loader to: {loader}")
else:
loader = 'pypyr.pypeloaders.fileloader'
logger.debug(f"use default pype loader: {loader}")
logger.debug(f"you set the initial context to: {pipe... | def load_and_run_pipeline(pipeline_name,
pipeline_context_input=None,
working_dir=None,
context=None,
parse_input=True,
loader=None) | Load and run the specified pypyr pipeline.
This function runs the actual pipeline by name. If you are running another
pipeline from within a pipeline, call this, not main(). Do call main()
instead for your 1st pipeline if there are pipelines calling pipelines.
By default pypyr uses file loader. This m... | 3.438695 | 3.248601 | 1.058516 |
logger.debug("starting")
try:
if parse_input:
logger.debug("executing context_parser")
prepare_context(pipeline=pipeline,
context_in_string=pipeline_context_input,
context=context)
else:
logger.debu... | def run_pipeline(pipeline,
context,
pipeline_context_input=None,
parse_input=True) | Run the specified pypyr pipeline.
This function runs the actual pipeline. If you are running another
pipeline from within a pipeline, call this, not main(). Do call main()
instead for your 1st pipeline if there are pipelines calling pipelines.
Pipeline and context should be already loaded.
Args:
... | 4.540237 | 4.384821 | 1.035444 |
logger.debug("started")
context.assert_child_key_has_value('fileWriteYaml', 'path', __name__)
out_path = context.get_formatted_string(context['fileWriteYaml']['path'])
# doing it like this to safeguard against accidentally dumping all context
# with potentially sensitive values in it to disk i... | def run_step(context) | Write payload out to yaml file.
Args:
context: pypyr.context.Context. Mandatory.
The following context keys expected:
- fileWriteYaml
- path. mandatory. path-like. Write output file to
here. Will create directories in path for you.
... | 5.279846 | 4.256027 | 1.240557 |
logger.debug("started")
debug = context.get('debug', None)
if debug:
keys = debug.get('keys', None)
format = debug.get('format', False)
if keys:
logger.debug(f"Writing to output: {keys}")
if isinstance(keys, str):
payload = {keys: conte... | def run_step(context) | Print debug info to console.
context is a dictionary or dictionary-like.
If you use pypyr.steps.debug as a simple step (i.e. you do NOT specify the
debug input context), it will just dump the entire context to stdout.
Configure the debug step with the following optional context item:
debug:
... | 3.465273 | 3.021696 | 1.146797 |
logger.debug("started")
assert context, ("context must be set for echo. Did you set "
"'echoMe=text here'?")
context.assert_key_exists('echoMe', __name__)
if isinstance(context['echoMe'], str):
val = context.get_formatted('echoMe')
else:
val = context['ec... | def run_step(context) | Simple echo. Outputs context['echoMe'].
Args:
context: dictionary-like. context is mandatory.
context must contain key 'echoMe'
context['echoMe'] will echo the value to logger.
This logger could well be stdout.
When you execute the pipeline, it should... | 8.071252 | 5.325359 | 1.515626 |
error_type = type(error)
if error_type.__module__ in ['__main__', 'builtins']:
return error_type.__name__
else:
return f'{error_type.__module__}.{error_type.__name__}' | def get_error_name(error) | Return canonical error name as string.
For builtin errors like ValueError or Exception, will return the bare
name, like ValueError or Exception.
For all other exceptions, will return modulename.errorname, such as
arbpackage.mod.myerror
Args:
error: Exception object.
Returns:
... | 2.28425 | 3.079015 | 0.741877 |
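The `get_error_name` row above is one of the few with complete code, so it can be lifted into a runnable sketch. Builtin errors report their bare class name; everything else is qualified with its module:

```python
# Runnable copy of get_error_name as shown above: builtins and __main__
# errors keep their bare name, anything else gets module-qualified.
def get_error_name(error):
    error_type = type(error)
    if error_type.__module__ in ['__main__', 'builtins']:
        return error_type.__name__
    return f'{error_type.__module__}.{error_type.__name__}'

print(get_error_name(ValueError('boom')))  # ValueError
```

For a non-builtin example, `json.JSONDecodeError` resolves to `json.decoder.JSONDecodeError`, since that is the class's actual defining module.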
logger.debug("starting")
logger.debug(f"loading module {module_abs_import}")
try:
imported_module = importlib.import_module(module_abs_import)
logger.debug("done")
return imported_module
except ModuleNotFoundError as err:
msg = ("The module doesn't exist. Looking for... | def get_module(module_abs_import) | Use importlib to get the module dynamically.
Get instance of the module specified by the module_abs_import.
This means that module_abs_import must be resolvable from this package.
Args:
module_abs_import: string. Absolute name of module to import.
Raises:
PyModuleNotFoundError: if mod... | 4.235834 | 4.244884 | 0.997868 |
logger.debug("starting")
logger.debug(f"adding {working_directory} to sys.path")
sys.path.append(working_directory)
logger.debug("done") | def set_working_directory(working_directory) | Add working_directory to sys.path.
This allows dynamic loading of arbitrary python modules in cwd.
Args:
    working_directory: string. path to add to sys.path
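Stripped of logging, the whole mechanism is a one-line append to the module search path; any directory added this way becomes importable:

```python
import sys

def set_working_directory(working_directory):
    # appending to sys.path lets the standard import machinery find
    # modules that live in that directory
    sys.path.append(working_directory)
```

After calling this with, say, a pipeline's working directory, `import mymodule` resolves against files in that directory as well.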
assert parent, ("parent parameter must be specified.")
assert child, ("child parameter must be specified.")
self.assert_key_has_value(parent, caller)
try:
child_exists = child in self[parent]
except TypeError as err:
# This happens if parent isn'... | def assert_child_key_has_value(self, parent, child, caller) | Assert that context contains key that has child which has a value.
Args:
parent: parent key
child: validate this sub-key of parent exists AND isn't None.
caller: string. calling function name - this used to construct
error messages
Raises:
... | 3.239124 | 2.945819 | 1.099566 |
assert key, ("key parameter must be specified.")
self.assert_key_exists(key, caller)
if self[key] is None:
raise KeyInContextHasNoValueError(
f"context['{key}'] must have a value for {caller}.") | def assert_key_has_value(self, key, caller) | Assert that context contains key which also has a value.
Args:
key: validate this key exists in context AND has a value that isn't
None.
caller: string. calling function name - this used to construct
error messages
Raises:
KeyNot... | 6.190369 | 4.621077 | 1.339594 |
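A minimal sketch of the two assertion helpers documented above, assuming a dict-subclass `Context` and the exception names from the docstrings (the real pypyr classes live in `pypyr.context` and `pypyr.errors`):

```python
# Hypothetical miniature of pypyr's Context assertion helpers.
class KeyNotInContextError(KeyError):
    """Key not in context."""

class KeyInContextHasNoValueError(KeyError):
    """Key exists in context but its value is None."""

class Context(dict):
    def assert_key_exists(self, key, caller):
        if key not in self:
            raise KeyNotInContextError(
                f"context['{key}'] doesn't exist. It must exist for {caller}.")

    def assert_key_has_value(self, key, caller):
        assert key, "key parameter must be specified."
        self.assert_key_exists(key, caller)
        if self[key] is None:
            raise KeyInContextHasNoValueError(
                f"context['{key}'] must have a value for {caller}.")
```

Note the two-stage failure mode: a missing key and a key that exists with a `None` value raise different exceptions, so callers can give precise error messages.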
assert context_item, ("context_item parameter must be specified.")
if extra_error_text is None or extra_error_text == '':
append_error_text = ''
else:
append_error_text = f' {extra_error_text}'
if not context_item.key_in_context:
raise KeyNo... | def assert_key_type_value(self,
context_item,
caller,
extra_error_text='') | Assert that keys exist of right type and has a value.
Args:
context_item: ContextItemInfo tuple
caller: string. calling function name - this used to construct
error messages
extra_error_text: append to end of error message.
Raises:
... | 2.312004 | 2.122471 | 1.089298 |
assert keys, ("*keys parameter must be specified.")
for key in keys:
self.assert_key_exists(key, caller) | def assert_keys_exist(self, caller, *keys) | Assert that context contains keys.
Args:
keys: validates that these keys exists in context
caller: string. calling function or module name - this used to
construct error messages
Raises:
KeyNotInContextError: When key doesn't exist in context. | 6.129691 | 9.427818 | 0.650171 |
for key in keys:
self.assert_key_has_value(key, caller) | def assert_keys_have_values(self, caller, *keys) | Check that keys list are all in context and all have values.
Args:
*keys: Will check each of these keys in context
caller: string. Calling function name - just used for informational
messages
Raises:
KeyNotInContextError: Key doesn't exist
... | 3.606723 | 5.013564 | 0.719393 |
assert context_items, ("context_items parameter must be specified.")
for context_item in context_items:
self.assert_key_type_value(context_item, caller, extra_error_text) | def assert_keys_type_value(self,
caller,
extra_error_text,
*context_items) | Assert that keys exist of right type and has a value.
Args:
caller: string. calling function name - this used to construct
error messages
extra_error_text: append to end of error message. This can happily
be None or ''.
*... | 3.581882 | 4.631426 | 0.773386 |
val = self[key]
if isinstance(val, str):
try:
return self.get_processed_string(val)
except KeyNotInContextError as err:
# Wrapping the KeyError into a less cryptic error for end-user
# friendliness
raise Ke... | def get_formatted(self, key) | Return formatted value for context[key].
If context[key] is a type string, will just format and return the
string.
If context[key] is a special literal type, like a py string or sic
string, will run the formatting implemented by the custom tag
representer.
If context[key... | 6.504622 | 5.384315 | 1.208069 |
if memo is None:
memo = {}
obj_id = id(obj)
already_done = memo.get(obj_id, None)
if already_done is not None:
return already_done
if isinstance(obj, str):
new = self.get_formatted_string(obj)
elif isinstance(obj, SpecialTagD... | def get_formatted_iterable(self, obj, memo=None) | Recursively loop through obj, formatting as it goes.
Interpolates strings from the context dictionary.
This is not a full on deepcopy, and it's on purpose not a full on
deepcopy. It will handle dict, list, set, tuple for iteration, without
any especial cuteness for other types or types... | 3.407521 | 3.429027 | 0.993728 |
if isinstance(input_string, str):
try:
return self.get_processed_string(input_string)
except KeyNotInContextError as err:
# Wrapping the KeyError into a less cryptic error for end-user
# friendliness
raise KeyNotInC... | def get_formatted_string(self, input_string) | Return formatted value for input_string.
get_formatted gets a context[key] value.
get_formatted_string is for any arbitrary string that is not in the
context.
Only valid if input_string is a type string.
Return a string interpolated from the context dictionary.
If inpu... | 4.871537 | 4.530273 | 1.07533 |
if value is None:
value = default
if isinstance(value, SpecialTagDirective):
result = value.get_value(self)
return types.cast_to_type(result, out_type)
if isinstance(value, str):
result = self.get_formatted_string(value)
resul... | def get_formatted_as_type(self, value, default=None, out_type=str) | Return formatted value for input value, returns as out_type.
Caveat emptor: if out_type is bool and value a string,
return will be True if str is 'True'. It will be False for all other
cases.
Args:
value: the value to format
default: if value is None, set to thi... | 4.648889 | 4.625883 | 1.004973 |
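The bool caveat in this docstring is worth pinning down: when the (formatted) value is a string, only the exact literal `'True'` maps to `True`. This helper is a hypothetical stand-in matching the documented behavior, not pypyr's actual implementation:

```python
# Per the docstring's caveat: a string casts to True only if it is
# exactly 'True'; non-strings fall back to ordinary truthiness.
def format_as_bool(value):
    if isinstance(value, str):
        return value == 'True'
    return bool(value)

print(format_as_bool('True'))   # True
print(format_as_bool('true'))   # False
```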
# arguably, this doesn't really belong here, or at least it makes a
# nonsense of the function name. given how py and strings
# look and feel pretty much like strings from user's perspective, and
# given legacy code back when sic strings were in fact just strings,
# keep... | def get_processed_string(self, input_string) | Run token substitution on input_string against context.
You probably don't want to call this directly yourself - rather use
get_formatted, get_formatted_iterable, or get_formatted_string because
these contain more friendly error handling plumbing and context logic.
If you do want to ca... | 11.971835 | 10.715446 | 1.11725 |
# k[0] = key name, k[1] = exists, k[2] = expected type
keys_exist = [(key, key in self.keys(), expected_type)
for key, expected_type in keys]
return tuple(ContextItemInfo(
key=k[0],
key_in_context=k[1],
expected_type=k[2],
... | def keys_of_type_exist(self, *keys) | Check if keys exist in context and if types are as expected.
Args:
*keys: *args for keys to check in context.
Each arg is a tuple (str, type)
Returns:
Tuple of namedtuple ContextItemInfo, same order as *keys.
ContextItemInfo(key,
... | 3.870791 | 2.967646 | 1.30433 |
def merge_recurse(current, add_me):
for k, v in add_me.items():
# key supports interpolation
k = self.get_formatted_string(k)
            # str not mergeable, so it doesn't matter if it exists in dest
if isinstance(v, str):
... | def merge(self, add_me) | Merge add_me into context and applies interpolation.
Bottom-up merge where add_me merges into context. Applies string
interpolation where the type is a string. Where a key exists in
context already, add_me's value will overwrite what's in context
already.
Supports nested hierar... | 4.712182 | 4.444805 | 1.060155 |
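The bottom-up merge can be sketched without pypyr's string interpolation: nested mappings merge recursively, and any other value in `add_me` overwrites what context already holds:

```python
from collections.abc import Mapping

# Sketch of the recursive merge described above, minus key/value
# interpolation: mappings merge, everything else overwrites.
def merge(current, add_me):
    for k, v in add_me.items():
        if (k in current and isinstance(current[k], Mapping)
                and isinstance(v, Mapping)):
            merge(current[k], v)
        else:
            current[k] = v
```

So merging `{'a': {'y': 20, 'z': 30}}` into `{'a': {'x': 1, 'y': 2}}` keeps `x`, overwrites `y`, and adds `z`.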
def defaults_recurse(current, defaults):
for k, v in defaults.items():
# key supports interpolation
k = self.get_formatted_string(k)
if k in current:
if types.are_all_this_type(Mapping, current[k], v):
... | def set_defaults(self, defaults) | Set defaults in context if keys do not exist already.
Adds the input dict (defaults) into the context, only where keys in
defaults do not already exist in context. Supports nested hierarchies.
Example:
Given a context like this:
key1: value1
key2:
... | 9.196434 | 8.53208 | 1.077865 |
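`set_defaults` is the mirror image of merge: an existing non-mapping value in context always wins over the supplied default, while nested mappings still recurse. A minimal sketch, again without interpolation:

```python
from collections.abc import Mapping

# Sketch of set_defaults: defaults only fill gaps; existing values win.
def set_defaults(current, defaults):
    for k, v in defaults.items():
        if k in current:
            if isinstance(current[k], Mapping) and isinstance(v, Mapping):
                set_defaults(current[k], v)
        else:
            current[k] = v
```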
assert rewriter, ("FileRewriter instance required to run "
"FileInRewriterStep.")
rewriter.files_in_to_out(in_path=self.path_in, out_path=self.path_out) | def run_step(self, rewriter) | Do the file in to out rewrite.
Doesn't do anything more crazy than call files_in_to_out on the
rewriter.
Args:
rewriter: pypyr.filesystem.FileRewriter instance. | 8.292324 | 4.771629 | 1.737839 |
assert representer, ("ObjectRepresenter instance required to run "
"ObjectRewriterStep.")
rewriter = ObjectRewriter(self.context.get_formatted_iterable,
representer)
super().run_step(rewriter) | def run_step(self, representer) | Do the object in-out rewrite.
Args:
representer: A pypyr.filesystem.ObjectRepresenter instance. | 10.535251 | 8.014318 | 1.314554 |
rewriter = StreamRewriter(self.context.iter_formatted_strings)
super().run_step(rewriter) | def run_step(self) | Do the file in-out rewrite. | 20.429811 | 13.908292 | 1.468894 |
formatted_replacements = self.context.get_formatted_iterable(
self.replace_pairs)
iter = StreamReplacePairsRewriterStep.iter_replace_strings(
formatted_replacements)
rewriter = StreamRewriter(iter)
super().run_step(rewriter) | def run_step(self) | Write in to out, replacing strings per the replace_pairs. | 11.682675 | 8.242608 | 1.417352 |
def function_iter_replace_strings(iterable_strings):
for string in iterable_strings:
yield reduce((lambda s, kv: s.replace(*kv)),
replacements.items(),
string)
return function_iter_replace_string... | def iter_replace_strings(replacements) | Create a function that uses replacement pairs to process a string.
The returned function takes an iterator and yields on each processed
line.
Args:
replacements: Dict containing 'find_string': 'replace_string' pairs
Returns:
function with signature: iterator of... | 4.760939 | 5.336993 | 0.892064 |
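The factory's shape is clear from the truncated body: it closes over the replacements dict and folds every find/replace pair over each incoming line with `functools.reduce`:

```python
from functools import reduce

# Sketch of iter_replace_strings: returns a generator function that
# applies every find -> replace pair, in dict order, to each line.
def iter_replace_strings(replacements):
    def function_iter_replace_strings(iterable_strings):
        for string in iterable_strings:
            yield reduce(lambda s, kv: s.replace(*kv),
                         replacements.items(), string)
    return function_iter_replace_strings

replacer = iter_replace_strings({'world': 'there', 'goodbye': 'bye'})
print(list(replacer(['hello world', 'goodbye world'])))
# ['hello there', 'bye there']
```

Because pairs apply in insertion order, an earlier replacement's output can itself be matched by a later pair.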
logger.debug("started")
context.assert_key_has_value(key='contextSetf', caller=__name__)
for k, v in context['contextSetf'].items():
logger.debug(f"setting context {k} to value from context {v}")
context[context.get_formatted_iterable(
k)] = context.get_formatted_iterable(v... | def run_step(context) | Set new context keys from formatting expressions with substitutions.
Context is a dictionary or dictionary-like.
context['contextSetf'] must exist. It's a dictionary.
Will iterate context['contextSetf'] and save the values as new keys to the
context.
For example, say input context is:
key1... | 5.942015 | 5.050726 | 1.176468 |
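A hypothetical miniature of `contextSetf`'s behavior, using `str.format_map` as a stand-in for pypyr's richer formatter: both the key and the value are formatting expressions interpolated against the context itself.

```python
# Hypothetical sketch: interpolate key and value against context, then
# store the result under the interpolated key.
def context_setf(context, setf):
    for k, v in setf.items():
        key = k.format_map(context)
        context[key] = v.format_map(context) if isinstance(v, str) else v

ctx = {'key1': 'value1', 'keyhere': 'dynamicname'}
context_setf(ctx, {'{keyhere}': '{key1} and more'})
# ctx['dynamicname'] == 'value1 and more'
```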
in_type = type(obj)
if out_type is in_type:
# no need to cast.
return obj
else:
return out_type(obj) | def cast_to_type(obj, out_type) | Cast obj to out_type if it's not out_type already.
If the obj happens to be out_type already, it just returns obj as is.
Args:
obj: input object
out_type: type.
Returns:
obj cast to out_type. Usual python conversion / casting rules apply. | 3.559295 | 4.08812 | 0.870644 |
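This row's code is complete enough to restate runnably: identity when the type already matches, otherwise the usual constructor-style conversion:

```python
# Runnable restatement of cast_to_type: no-op when obj is already
# exactly out_type, else convert via out_type(obj).
def cast_to_type(obj, out_type):
    if type(obj) is out_type:
        return obj
    return out_type(obj)

print(cast_to_type('123', int))  # 123
print(cast_to_type(7, str))      # 7
```

The identity shortcut matters for mutable objects: a list passed with `out_type=list` comes back as the same object, not a copy.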
tag_representers = [PyString, SicString]
yaml_loader = get_yaml_parser_safe()
for representer in tag_representers:
yaml_loader.register_class(representer)
pipeline_definition = yaml_loader.load(file)
return pipeline_definition | def get_pipeline_yaml(file) | Return pipeline yaml from open file object.
Use specific custom representers to model the custom pypyr pipeline yaml
format, to load in special literal types like py and sic strings.
If looking to extend the pypyr pipeline syntax with special types, add
these to the tag_representers list.
Args:
... | 7.524 | 5.018706 | 1.499191 |
yaml_writer = yamler.YAML(typ='rt', pure=True)
# if this isn't here the yaml doesn't format nicely indented for humans
yaml_writer.indent(mapping=2, sequence=4, offset=2)
return yaml_writer | def get_yaml_parser_roundtrip() | Create the yaml parser object with this factory method.
The round-trip parser preserves:
- comments
- block style and key ordering are kept, so you can diff the round-tripped
source
- flow style sequences ('a: b, c, d') (based on request and test by
Anthony Sottile)
- anchor names that... | 7.720961 | 8.909068 | 0.866641 |
yaml_writer = get_yaml_parser_roundtrip()
# Context is a dict data structure, so can just use a dict representer
yaml_writer.Representer.add_representer(
Context,
yamler.representer.RoundTripRepresenter.represent_dict)
return yaml_writer | def get_yaml_parser_roundtrip_for_context() | Create a yaml parser that can serialize the pypyr Context.
Create yaml parser with get_yaml_parser_roundtrip, adding Context.
This allows the yaml parser to serialize the pypyr Context. | 5.172465 | 4.583835 | 1.128414 |
if not context_arg:
logger.debug("pipeline invoked without context arg set. For "
"this keyvaluepairs parser you're looking for "
"something like: "
"pypyr pipelinename 'key1=value1,key2=value2'.")
return None
logger.debug("sta... | def get_parsed_context(context_arg) | Parse input context string and returns context as dictionary. | 10.348705 | 9.723952 | 1.064249 |
logger.debug("started")
deprecated(context)
context.assert_key_has_value(key='fetchJson', caller=__name__)
fetch_json_input = context.get_formatted('fetchJson')
if isinstance(fetch_json_input, str):
file_path = fetch_json_input
destination_key_expression = None
else:
... | def run_step(context) | Load a json file into the pypyr context.
json parsed from the file will be merged into the pypyr context. This will
overwrite existing values if the same keys are already in there.
I.e. if file json has {'eggs': 'boiled'} and context {'eggs': 'fried'}
already exists, returned context['eggs'] will be 'b... | 4.013794 | 3.426715 | 1.171324 |
if 'fetchJsonPath' in context:
context.assert_key_has_value(key='fetchJsonPath', caller=__name__)
context['fetchJson'] = {'path': context['fetchJsonPath']}
if 'fetchJsonKey' in context:
context['fetchJson']['key'] = context.get('fetchJsonKey', None)
logger.warning... | def deprecated(context) | Create new style in params from deprecated. | 7.90994 | 7.151928 | 1.105987 |
return any([
re.match(pattern, path) for pattern in QC_SETTINGS['IGNORE_REQUEST_PATTERNS']
]) | def _ignore_request(self, path) | Check to see if we should ignore the request. | 8.322208 | 6.911558 | 1.2041 |
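The ignore check reduces to `any()` over `re.match` calls. A standalone sketch with hypothetical patterns (the real values come from django-querycount's `QC_SETTINGS`):

```python
import re

# Hypothetical patterns; in django-querycount these come from
# QC_SETTINGS['IGNORE_REQUEST_PATTERNS'].
IGNORE_REQUEST_PATTERNS = [r'^/static/', r'^/admin/']

def ignore_request(path):
    # re.match anchors at the start of the string, so these patterns act
    # as prefix filters unless they carry their own anchors
    return any(re.match(pattern, path) for pattern in IGNORE_REQUEST_PATTERNS)

print(ignore_request('/static/css/app.css'))  # True
```

Note the companion `_ignore_sql` uses `re.search` instead, so its patterns match anywhere in the SQL text rather than only at the start.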
return any([
re.search(pattern, query.get('sql')) for pattern in QC_SETTINGS['IGNORE_SQL_PATTERNS']
]) | def _ignore_sql(self, query) | Check to see if we should ignore the sql query. | 8.371638 | 6.903374 | 1.212688 |
if QC_SETTINGS['DISPLAY_DUPLICATES']:
for query, count in self.queries.most_common(QC_SETTINGS['DISPLAY_DUPLICATES']):
lines = ['\nRepeated {0} times.'.format(count)]
lines += wrap(query)
lines = "\n".join(lines) + "\n"
output ... | def _duplicate_queries(self, output) | Appends the most common duplicate queries to the given output. | 6.082681 | 5.185816 | 1.172946 |
request_totals = self._totals("request")
response_totals = self._totals("response")
return request_totals[2] + response_totals[2] | def _calculate_num_queries(self) | Calculate the total number of request and response queries.
Used for count header and count table. | 6.092392 | 3.737286 | 1.630165 |
# If we are in this method due to a signal, only reload for our settings
setting_name = kwargs.get('setting', None)
if setting_name is not None and setting_name != 'QUERYCOUNT':
return
# Support the old-style settings
if getattr(settings, 'QUERYCOUNT_THRESHOLDS', False):
QC_S... | def _process_settings(**kwargs) | Apply user supplied settings. | 4.304855 | 4.237262 | 1.015952 |