title: string (lengths 5–164)
labels: list
bodyText: string (lengths 0–46.7k)
Shouldn't LightningDataModule inherit abc.ABC so that the @abstractmethod decorator works properly?
[ "feature", "data handling" ]
pytorch-lightning/pytorch_lightning/core/datamodule.py Line 89 in a55c481 class LightningDataModule(object, metaclass=_DataModuleWrapper): # pragma: no cover To ensure all abstract methods are overridden, one should inhe...
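The concern above can be reproduced in plain Python: @abstractmethod is only enforced at instantiation time when the class's metaclass derives from abc.ABCMeta, which inheriting abc.ABC provides. A minimal sketch with illustrative class names (not Lightning's own):

```python
import abc

# With a plain `object` base, @abstractmethod is recorded but never enforced:
class PlainModule:
    @abc.abstractmethod
    def setup(self):
        ...

PlainModule()  # instantiates without any error

# Inheriting abc.ABC (metaclass ABCMeta) blocks instantiation until
# every abstract method is overridden:
class AbcModule(abc.ABC):
    @abc.abstractmethod
    def setup(self):
        ...

try:
    AbcModule()
except TypeError as exc:
    print("instantiation blocked:", exc)
```

Note that a custom metaclass like the quoted `_DataModuleWrapper` would itself need to derive from `abc.ABCMeta` (rather than plain `type`) to keep this enforcement.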
When is `on_validation_epoch_start` / `on_validation_epoch_end` being called?
[ "question" ]
❓ Questions and Help What is your question? When is on_validation_epoch_start / on_validation_epoch_end being called? Is there a doc that explains the order in which the callback functions are called? I need a callback function that will be called at the end of every validation epoch. I read the callback docs and I think o...
Issue with resume_from_checkpoint (on CPU and GPU)
[ "bug", "help wanted" ]
πŸ› Bug Hi, I've upgraded recently pytorch-lightning from 0.7.5 to 0.8.5, and I have encountered an issue with the resume_from_checkpoint from the Trainer class. To Reproduce The dummy example below shows the behaviour: Run the script for a few loops in order to create a first checkpoint. Stop. Re-run the code, it shou...
Resume training with resetting / increasing max number of epochs
[ "feature", "help wanted", "won't fix" ]
Hi! I would like to know how one can continue training from an existing checkpoint when, after resuming, the saved learning rate, current epoch, and other significant state end training immediately. Let's say I train a classifier using ReduceLROnPlateau and save the best epoch via the ModelCheckpoint callback. I set ma...
Disabling automatic .train() for loss criteria
[ "question", "won't fix" ]
❓ Questions and Help What is your question? I have a loss module that is loaded as part of my lightning module with its own inner network. (output is passed through the network and the result is used to compute the loss) The problem is that when starting a train_step Lightning automatically changes the entire module to...
Fix DDP logging
[ "bug", "priority: 0", "distributed" ]
Add a global_zero_only=True flag; if False, create individual files prefixed with the machine number. Write a logging callback that will do map-reduce. Can we do this in the metrics? Aggregate all tensors on global zero first (might run into memory issues). Gather each output individually in CPU memory. We want to preserve the f...
Trainer.on_gpu incorrectly set to False when specifying `gpus=0`
[ "help wanted", "docs" ]
πŸ› Bug When creating a trainer with the arg gpus=0, the field on_gpu is always set False, even on machines with CUDA available. The existing logic for on_gpu is: self.on_gpu = True if (gpus and torch.cuda.is_available()) else False is buggy because 0 is "falsy". It should probably be: self.on_gpu = gpus is not None an...
Support Slash Separator for TrainResult / EvalResult for TensorBoard
[ "feature", "help wanted", "won't fix" ]
πŸš€ Feature TensorBoard supports and recommends grouping tags prefixed with slashes. Tags are grouped under the same drop-down with other tags that share the prefix. For example: step step/train_loss step/val_loss epoch epoch/train_loss epoch/val_loss Currently, both TrainResult and EvalResult prefix the results with an...
Cannot pickle custom metric with DDP mode.
[ "bug", "help wanted" ]
πŸ› Bug To Reproduce Steps to reproduce the behavior: just run this script. Code sample import torch import torch.nn.functional as F import pytorch_lightning as pl from pytorch_lightning.metrics import Metric, TensorMetric class MetricPerplexity(Metric): """ Computes the perplexity of the model. """ ...
Support for Scheduler taking a value to the step function
[ "feature", "won't fix", "discussion" ]
Looking at the code in training_loop.py, it seems the only scheduler that can take values in the step function is ReduceLROnPlateau; however, there is the CosineAnnealingWarmRestarts scheduler, and custom schedulers can take an epoch/step id or any other value in the step function. pytorch-lightning/pyt...
Failing docker-Conda build
[ "bug", "help wanted", "priority: 0" ]
πŸ› Bug there seems to be some connection issue while creating Conda env To Reproduce https://github.com/PyTorchLightning/pytorch-lightning/runs/957741187 Additional context
Some questions about checkpoints and learning rate
[ "question" ]
How to pass the learning rate to the progress bar, and how to choose the metric for saving model weights? Thanks!
Throw warning for using monitor in trainer
[ "bug", "feature", "help wanted", "let's do it!" ]
In the init method of the checkpoint monitor, we should throw a warning when the monitor key is used inside the trainer.
The total number of batches shown by the sanity-check progress bar is wrong
[ "bug", "help wanted", "priority: 0" ]
πŸ› Bug The total of the sanity check progress bar is set by pytorch-lightning/pytorch_lightning/callbacks/progress.py Line 296 in 4d0406e self.val_progress_bar.total = convert_inf(trainer.num_sanity_val_steps * len(trainer.val...
unexpected keyword argument 'amp_type' in trainer __init__()
[ "bug", "help wanted" ]
πŸ› Bug Versions used: Pytorch: 1.6.0 Pytorch Lightning: 0.9.12rc. trainer = Trainer(amp_type='apex', ...) Error message: __init__() got an unexpected keyword argument 'amp_type' To Reproduce Init trainer as shown above.
Optimizer initialization with DDP
[ "feature", "discussion" ]
❓ Questions and Help What is your question? I would have expected optimizers to always be initialized after parameters have been moved to their destination device. However, some ddp backends such as ddp_backend, ddp_spawn_backend, ddp2_backend initialize the optimizer with the CPU parameters befo...
Batchsize and learning rate scheduler
[ "question" ]
I was wondering if I need to adjust the batch size when using TPUs. I had a memory error when trying to run a batch of size 128 (images 256x256x3), which works perfectly fine on GPUs. Furthermore, do I need to adjust my custom learning rate scheduler, which on GPUs runs every batch (not just every epoch): def configure_optimize...
How to use multiple metric monitors in ModelCheckpoint callback?
[ "question", "discussion", "design" ]
❓ Questions and Help What is your question? How can I use multiple metric monitors in the ModelCheckpoint? Put another way, how can I use multiple ModelCheckpoint callbacks? It seems that the Trainer only accepts a single ModelCheckpoint in the checkpoint_callback argument. Code site-packages/pytorch_lightning/trainer/cal...
load_from_checkpoint: TypeError: __init__() missing 1 required positional argument
[ "bug", "question" ]
❓ Questions and Help What is your question? load_from_checkpoint: TypeError: __init__() missing 1 required positional argument. I have read the issues before, but what's different here is that my LightningModule inherits from a self-defined LightningModule. How to solve this problem, or what is the best practice better suited...
Checkpoint monitor str in default Trainer
[ "question", "won't fix" ]
❓ Questions and Help Hi, In the default setting (with checkpoint_callback=True), when the method configure_checkpoint_callback is invoked by the constructor, the model hasn't yet been loaded into the trainer (as run_pretrain_routine is invoked only after fit()). Thus, when calling self.configure_checkpoint_callback(True), the ...
DistributedDataParallel with nccl backend produces zombie processes
[ "bug" ]
Hey lightning community, first I want to thank you for this nice project. It helped me a lot to improve my research code and I'm happy to recommend it to my colleagues whenever they complain about their code mess. I have a problem with some kind of race condition which is reproducible in a slurm environment wi...
ModelCheckpoint with custom filepath doesn't support training on multiple nodes
[ "bug", "help wanted", "priority: 0" ]
πŸ› Bug When training on multiple nodes using ModelCheckpoint with custom filepath, it will raise FileExistsError caused by the following line of code: model_checkpoint.py#L127. Maybe a try-except block is needed?
RNN batch_first performance considerations based on backing kernel
[ "question", "won't fix" ]
Question: Can I use batch_first=True everywhere without worrying about performance differences on CPU, GPU, TPU? The defaults PyTorch sets are batch_first=False for all RNNs (RNN, LSTM, GRU). Pytorch Lightning mandates batch_first=True for truncated_bptt_steps, however. Will setting it True mean relatively faster pe...
is limit_train_batches shuffle or random
[ "question" ]
Hi, I am using limit_train_batches. If it is set, does it mean a sub-dataset of the whole train dataset, similar to torch.utils.data.random_split?
warm up LR causes crash
[ "bug", "help wanted" ]
My resnet encoder and transformer decoder are not training well. So trying all kinds of stuff. Latest attempt to improve is to use a warmup learning rate as described here: https://github.com/PyTorchLightning/pytorch-lightning/blob/master/docs/source/optimizers.rst My code is an exact copy: def optimizer_step(self...
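The warm-up recipe in the linked docs boils down to scaling the base learning rate over the first optimizer steps. A minimal sketch of just the scaling function (the 500-step horizon is an arbitrary assumption, not taken from the docs):

```python
def linear_warmup_scale(step, warmup_steps=500):
    # Ramp linearly from ~0 up to 1.0 over the first `warmup_steps` steps,
    # then stay at 1.0. Multiply the base LR by this factor each step.
    return min(1.0, float(step + 1) / warmup_steps)

base_lr = 1e-3
for step in (0, 249, 499, 1000):
    print(step, base_lr * linear_warmup_scale(step))
```

In an optimizer_step override this factor would typically be applied to each pg['lr'] in optimizer.param_groups before stepping; when copied code crashes, one common culprit is the hook's signature having changed between Lightning versions, so it is worth checking it against the docs for the installed version.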
Trainer "optimizers" attribute is None when saving checkpoint and callbacks list is not empty
[ "bug", "help wanted", "waiting on author", "checkpointing" ]
πŸ› Bug I'm training a GAN and I'm running a few custom callbacks as well. When the model attempts to save at the end of the first epoch, it crashes. Here's the very strange thing: I have the exact same code in a Jupyter notebook and the error doesn't occur. To Reproduce Steps to reproduce the behavior: The bug does not...
mlflow checkpoints in the wrong location
[ "bug", "help wanted" ]
I'm not sure if I'm doing something wrong, I'm using mlflow instead of tensorboard as a logger. I've used the defaults i.e. mlflow = loggers.MLFlowLogger() trainer = pl.Trainer.from_argparse_args(args, logger=mlflow) I'm ending up with the following folder structure \mlflow \mlflow\1 \mlflow\1\{guid}\artifacts \mlflow...
Custom Checkpoint callback for multiple models
[ "question" ]
❓ Questions and Help Before asking: search the issues. search the docs. What is your question? I am looking to write my own callback for checkpointing for a list of models I initialize in init(). Code I created 10 timeseries models and 1 image model, let's say. Each model inherits LightningModule. So LITFusionExp has ...
Issue with pl.Trainer.from_argparse_args(...)
[ "bug", "help wanted" ]
πŸ› Bug To Reproduce Steps to reproduce the behavior: Use parser = pl.Trainer.add_argparse_args(parser) Run python main.py --overfit_batches 1 The training runs over the whole dataset instead of running on a single batch Code sample Expected behavior Only one batch should have run. Environment CUDA: GPU: Tesla...
Using IterableDatasets without __len__ for Training
[ "bug", "help wanted" ]
Calling fit(model, trainloader, evalloader) internally calls enforce_datamodule_dataloader_override. This function has the if statement if (train_dataloader or val_dataloaders) and datamodule:. pytorch-lightning/pytorch_lightning/trainer/configuration_validator.py Line 13 in ...
Adaptive Gradient Clipping
[ "feature", "help wanted" ]
πŸš€ Feature See code here: https://github.com/pseeth/autoclip Motivation a simple method for automatically and adaptively choosing a gradient clipping threshold, based on the history of gradient norms observed during training. Experimental results show that applying AutoClip results in improved generalization performanc...
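The linked AutoClip idea is small enough to sketch without torch: keep a history of observed gradient norms and clip each step at a chosen percentile of that history. This is an illustrative re-implementation under those assumptions, not the autoclip package's actual API:

```python
import math

class AutoClip:
    def __init__(self, percentile=10.0):
        self.percentile = percentile
        self.history = []  # gradient norms observed so far

    def threshold(self, grad_norm):
        # Record the new norm, then return the nearest-rank percentile
        # of the full history as the clipping threshold for this step.
        self.history.append(grad_norm)
        ranked = sorted(self.history)
        idx = max(0, math.ceil(self.percentile / 100.0 * len(ranked)) - 1)
        return ranked[idx]

clipper = AutoClip(percentile=50.0)
for grad_norm in (1.0, 2.0, 3.0, 4.0):
    limit = clipper.threshold(grad_norm)
    scale = min(1.0, limit / grad_norm)  # factor to rescale gradients by
print(limit, scale)  # 2.0 0.5
```

With real gradients the `scale` factor would be applied to every parameter's gradient (or passed to a clip-by-norm utility) before the optimizer step.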
TensorBoardLogger not saving hparams without metrics
[ "bug", "help wanted" ]
πŸ› Bug log_hyperparams for TensorBoardLogger saves no data with default metrics=None, only hparam entries/names show up in sidebar To Reproduce Steps to reproduce the behavior: import pytorch_lightning as pl logger = pl.loggers.TensorBoardLogger("./test_logs") test_dict = {"test":0} logger.log_hyperparams(test_dict) ...
Partially overwrite parameters when using load_from_checkpoint
[ "feature", "help wanted", "won't fix" ]
πŸš€ Feature I noticed that PL now supports arbitrary nested dictionaries as hparams; I'm really happy with that. But I found a small problem when using load_from_checkpoint. This function accepts kwargs to override hparams from the checkpoint, but it uses dict.update, and Python's original update function cannot handle ...
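A recursive merge is the usual fix for this: plain dict.update replaces a whole nested sub-dict, discarding its sibling keys, while a deep update only overrides the leaves actually given. A sketch (the helper name is hypothetical):

```python
def deep_update(base, overrides):
    # Merge level by level instead of replacing nested dicts wholesale.
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(base.get(key), dict):
            deep_update(base[key], value)
        else:
            base[key] = value
    return base

hparams = {"optimizer": {"lr": 1e-3, "weight_decay": 1e-2}}
deep_update(hparams, {"optimizer": {"lr": 5e-4}})
print(hparams)  # weight_decay survives; only lr is overridden
```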
AttributeError: module 'pytorch_lightning' has no attribute 'TrainResult'
[ "bug", "help wanted" ]
πŸ› Bug Following https://pytorch-lightning.readthedocs.io/en/latest/new-project.html I tried to log loss values to Tensorboard: def training_step(self, batch, batch_idx): loss = ... result = pl.TrainResult(minimize=loss) result.log('train_loss', loss) return result It seems module "TrainResult" is not present in pytor...
CrossEntropyLoss with weights
[ "question" ]
I need weights in CrossEntropyLoss (actually multiple, but the same issue). The documentation talks about tensors copied from other tensors, but there is no tensor to copy from in the init. So I'm stuck. To make the weights unquestionably simple, I use ones. class JJG_Transformer(pl.LightningModule): def __init_...
Gradient Clipping for discriminator only
[ "feature", "help wanted" ]
How can I clip the weights for only discriminator while training GAN? Thanks
validation_epoch_end won't log if no logging is done in validation_step
[ "bug", "help wanted", "checkpointing" ]
πŸ› Bug @edenlightning looks like setting both logger=False and prog_bar=False won't do anything. If this is intended, maybe we should add a warning or something. Also saw another issue, if I don't log anything in validation_step then logged values in validation_epoch_end won't be logged too even if we set logger=True. ...
ModelCheckpoint does not create full path
[ "bug", "help wanted", "priority: 0", "checkpointing" ]
πŸ› Bug To Reproduce Run checkpoint_callback = ModelCheckpoint('my/path/') Only my folder is created. I think this line discard the last trailing slash. So the directories are not created as intended when the paths are getting split. Expected behavior Path should be fully created.
Validation step isn't being run
[ "question" ]
❓ Questions and Help What is your question? I have been trying to get the trainer to call the validation_step function, but it doesn't seem to ever get called. I assume I am missing something obvious, but having looked at the tutorials and docs I haven't been able to find it. The code for the model and trainer ...
add a self.lr to trainer
[ "feature", "help wanted" ]
πŸš€ Feature Motivation Pitch Alternatives Additional context
calling Trainer.test(model) will perform test twice
[ "bug", "help wanted" ]
πŸ› Bug When calling Trainer.test(model) after training separately, the Test routine will be called twice. To Reproduce Steps to reproduce the behavior: model = my_lightning_model(some_hparams) trainer = Trainer() trainer.fit(model) trainer.test(model) #the test routine will be performed twice Expected behavior Th...
TypeError: validation_step() takes 3 positional arguments but 4 were given
[ "bug", "help wanted" ]
πŸ› Bug When running my model I get the error message: TypeError: validation_step() takes 3 positional arguments but 4 were given Stacktrace: line 106, in <module> trainer.fit(model) line 707, in fit self.run_pretrain_routine(model) line 812, in run_pretrain_routine self.evaluate(model, self.get_val_dataloaders(),self....
I can't find callbacks on pl.Trainer()
[ "question" ]
I saw in the documentation that it shows the ' argument', but when I run it I get an error. The same for the 'num_tpu_cores' argument TypeError Traceback (most recent call last) <ipython-input-23-62ad7a703918> in <module>() 2 3 model = LightningBirdsClassifier(num_classes=200) ----> 4 trainer = ...
The difference between Module and Trainer load from checkpoint?
[ "docs" ]
LightningModule has a function load_from_checkpoint, while the trainer also has a variable, namely resume_from_checkpoint, what's the difference between them? By the way, I want to print the best result of the checkpoint whenever I resume from the ckpt, how can I do this job? I override the on_load_checkpoint of Lightn...
Error: object has no attribute 'num_train_imgs' in version master, but not 0.6.0
[ "bug", "help wanted" ]
I am running on Ubuntu 18, and trying to run the vq_vae.yaml config from this package: https://github.com/AntixK/PyTorch-VAE It runs fine under version 0.6.0 with python 3.7, but when I update to lightning-master (0.6.1) I get this error: AttributeError: 'VAEXperiment' object has no attribute 'num_train_imgs' The releva...
Why is there no training_epoch_end?
[ "feature", "help wanted", "let's do it!" ]
πŸš€ Feature If I want to calculate and log average statistics for the training epoch, it seems there is no option to define a "training_epoch_end" in the LightningModule, as there is for validation_epoch_end and test_epoch_end. Motivation Seems very intuitive to have this function. I know the on_epoch_end hook exists, ...
fast_dev_run -> unit_test
[ "feature", "discussion" ]
Anyone want to make this change? Rename fast_dev_run -> unit_test, and add checking the test set as well (currently it only checks val and train).
[distributed] set_nvidia_flags doesn't affect dp, does affect ddp
[ "bug", "help wanted", "good first issue", "priority: 0" ]
πŸ› Bug When default CUDA GPU order differs from PCI_BUS_ID order the user won't use the same GPUs in DP and DDP modes. Trainer(gpus='0,1,2', distributed_backend='dp') uses gpus with PCI_BUS_ID's: '0,1,3' (on my machine) whereas Trainer(gpus='0,1,2', distributed_backend='ddp') uses gpus with PCI_BUS_ID's: '0,1,2' It see...
Update CHANGELOG for 0.7.x
[ "help wanted" ]
πŸ› Bug Updated CHANGELOG according to the reset changes (about last two weeks) especially deprecated items like data_loader or xxxxx_end Additional context https://github.com/PyTorchLightning/pytorch-lightning/milestone/4
Training on TPU stuck at "Waiting to connect to client mesh master (300 seconds) localhost:54541"
[ "bug", "help wanted" ]
πŸ› Bug I am training GPT2 model on TPU but training is getting stuck with following as the last line: tensorflow/compiler/xla/xla_client/mesh_service.cc:208] Waiting to connect to client mesh master (300 seconds) localhost:54541 To Reproduce I have followed all steps as outlined in https://github.com/mgrankin/ru_transf...
Github 0.7.1 release
[ "docs" ]
If I run pip install pytorch-lightning I get version 0.7.1; however, there is no official 0.7.1 release on Github. Is it intentional? Note that there is a 0.7.1 git tag.
Callback derived class is called without module argument
[ "bug", "help wanted" ]
πŸ› Bug The class Callback(abc.ABC) API expects trainer and sometimes pl_module to be supplied. E.g.: def on_init_start(self, trainer): pass See the definition However, the caller TrainerCallbackHookMixin calls the callback methods without the module argument. E.g.: for callback in self.callbacks: callback.on_...
Issue running on Colab TPUs
[ "bug", "help wanted" ]
I am trying to train my model using Colab TPUs, but I am getting the following error and am a bit baffled. Any help or guidance would be greatly appreciated. /usr/local/lib/python3.6/dist-packages/pytorch_lightning/core/decorators.py:13: UserWarning: data_loader decorator deprecated in 0.7.0. Will remove 0.9.0 warnin...
Advice on some cases where it is hard to remove the cuda() call
[ "question" ]
❓ Questions and Help What is your question? I understand that it's beneficial to remove .cuda() or .to() calls to make the code flexible. But I experience in some cases it's hard to know on what device my tensor is. In the following code, my batch is raw strings, so the batch is passed as pure python list not tensor. I...
TensorBoardLogger should be able to add metric names in hparams
[ "feature", "help wanted", "won't fix" ]
πŸš€ Feature TensorBoard allows investigating the effect of hyperparameters in the hparams tab. Unfortunately, the log_hyperparams function in TensorBoardLogger cannot add any information about which of the logged metrics is actually a "metric" which can be used for such a comparison. Motivation I would like to use the b...
Enable artifact logging for mlflow logger
[ "feature", "help wanted", "logger" ]
πŸš€ Feature mlflow provides the ability to log artifacts (a local file or directory). However, the MLFlowLogger class does not have the wrapper method for this functionality. Motivation I always use the log_artifacts method of mlflow to log things like the last weights file for the run, confusion matrix image for the la...
Wandb logger doesn't upload saved model checkpoint for final epoch
[ "bug", "help wanted", "logger" ]
πŸ› Bug When training a model on the TPU and using the wandb logger, the checkpoint for the last epoch trained doesn't get uploaded to wandb. To Reproduce Colab notebook: https://colab.research.google.com/drive/1oPaRWGZcz6YEol012xFADN42LV-jowtT
Hyperparameter Search
[ "question" ]
Hi, It looks like the hyperparameters search example is broken (https://github.com/optuna/optuna/blob/master/examples/pytorch_lightning_simple.py). Since this is such a common task, is there any example that documents how to properly integrate with a library like optuna or ray tune? Thanks!
configure_optimizers with OneCycleLR and Pretrain Freeze/Unfreeze
[ "question", "won't fix" ]
Hello. Thanks for the work on this framework - it's something I've been looking for, and I am currently transitioning all my own work from fast.ai to pytorch-lightning. I'm currently stuck on the configure_optimizers step. For those not familiar, the core workflow of fast.ai goes something like this: #create mo...
Do not have combined train+val progress bar, keep bar output after epoch is finished
[ "feature", "help wanted", "won't fix" ]
πŸš€ Feature The progress bar labeled "Epoch" should be renamed to "Train" and the validation data should be displayed only in a separate bar. Additionally, each epoch should leave the final training and validation bar on-screen for visual inspection. Motivation It's confusing that training and validation are shown in a ...
Better message when DataLoader is wrong
[ "bug", "let's do it!" ]
On the verge between bug and improvement. There was a bug in my validation DataLoader and it was returning irrelevant stuff. Accidentally the length was 0. Probably an edge-case combination. The error I was getting during the validation sanity check was quite cryptic: Traceback (most recent call last): File "UNet_WaveP...
Colab TPU error
[ "question", "accelerator: tpu" ]
I'm trying to run an LSTM model on TPU with colab. It throws the following error. Exception in device=TPU:1: Aborted: Session 0275bc9f6430801b is not found. Exception in device=TPU:3: Aborted: Session 780bc43376b5f650 is not found. Traceback (most recent call last): Traceback (most recent call last): File "/usr/local/l...
Change the way the configure_optimizers() returns
[ "feature", "help wanted", "won't fix", "discussion" ]
πŸš€ Feature Force the method LightningModule.configure_optimizers() to return two lists. Motivation Right now you offer flexibility in returning one of the following: - Single optimizer - List or Tuple - List of optimizers - Two lists - The first list has multiple optimizers, the second a list of LR schedulers But, that ca...
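Whatever the public API settles on, internally the accepted forms can be normalized into the proposed two-list shape. A sketch covering the variants listed above (the function name is hypothetical; real optimizers are replaced by sentinel objects):

```python
def normalize_optim_conf(conf):
    if not isinstance(conf, (list, tuple)):
        return [conf], []                    # single optimizer
    if len(conf) == 2 and isinstance(conf[0], (list, tuple)):
        return list(conf[0]), list(conf[1])  # two lists: optimizers, schedulers
    return list(conf), []                    # flat list/tuple of optimizers

opt, sched = object(), object()
assert normalize_optim_conf(opt) == ([opt], [])
assert normalize_optim_conf([opt, opt]) == ([opt, opt], [])
assert normalize_optim_conf(([opt], [sched])) == ([opt], [sched])
```

Note the residual ambiguity the feature request points at: a flat two-element sequence whose first element happens to be a sequence cannot be told apart from the two-list form, which is an argument for mandating the two-list shape.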
IterableDataset Issue, OverflowError: cannot convert float infinity to integer
[ "bug", "help wanted" ]
Hey all, I am very new to ML and PyTorch and PyTorch Lightning, so if this is a simple problem sorry to bother. However, I am struggling to switch from PyTorch to PyTorch Lightning. The PyTorch code runs with no error on Google Colab hence I think that the structure is fine. Now I am trying to implement Lightning follo...
Race condition and repeated os.remove in load_spawn_weights
[ "question", "won't fix" ]
In multi-gpu, multi-node training, after training is finished, the following error occurs: 0: File ".../estevens/pytorch/lightning_resnet50.py", line 126, in <module> 0: ResnetLightningExample.init_from_cli(sys.argv[1:]).main() 0: File "...//lightning_utils/lightning_utils.py", line 215, in main 0: trainer....
Add support for hierarchical dict
[ "feature", "let's do it!" ]
πŸš€ Feature Motivation Since v0.7.0, LightningModule accepts dict hparams; however, TensorBoardLogger still raises an error with a hierarchical dict. Considering compatibility with other packages, especially Hydra #807, hierarchical dicts should be accepted by all loggers. Pitch Flatten the hierarchical dict before hpara...
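Flattening before logging is a small recursive transform; a sketch of the pitch, with a slash separator chosen arbitrarily:

```python
def flatten_dict(d, parent_key="", sep="/"):
    # Turn {"optimizer": {"lr": 1e-3}} into {"optimizer/lr": 1e-3} so
    # loggers that expect flat scalar-keyed dicts can consume hparams.
    flat = {}
    for key, value in d.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            flat.update(flatten_dict(value, new_key, sep))
        else:
            flat[new_key] = value
    return flat

print(flatten_dict({"optimizer": {"name": "adam", "lr": 1e-3}, "seed": 0}))
# {'optimizer/name': 'adam', 'optimizer/lr': 0.001, 'seed': 0}
```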
epoch_end Logs Default to Steps Instead of Epochs
[ "bug", "help wanted" ]
πŸ› Bug Logs generated within validation_epoch_end have their iteration set to the number of steps instead of number of epochs. To Reproduce Steps to reproduce the behavior: Create LightningModile with the below functions. Train the module for 1 or more epochs Run Tensorboard Note the iteration for logs generated in va...
Question about return value of `validation_epoch_end`
[ "question", "won't fix" ]
❓ Questions and Help Before asking: search the issues. search the docs. What is your question? I'm a bit confused about what to return from methods like validation_epoch_end and what to put inside its log member. Based on the document, the log member of the return value of validation_epoch_end is mainly for logging and ...
No validation checks when overfit_pct is set
[ "bug", "help wanted" ]
πŸ› Bug When setting the overfit_pct to any value between 0 and 1 (exclusive) in trainer, the validation checks are disabled. To Reproduce I have worked on a minimal example to reproduce the bug: import pytorch_lightning as pl import torch class Dataset(torch.utils.data.Dataset): def __init__(self, input_dim, outp...
Learning Rate Schedulers' default dictionary parameters should be set via the Trainer
[ "feature", "help wanted" ]
πŸš€ Feature The default Learning Rate Schedulers (LRS) dictionary parameters should be settable from the Trainer constructor. Motivation The documentation doesn't seem to be clear that the LRS have the following additional parameters available to be set when you configure the optimizers: 'interval': 'epoch', # defa...
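The merge this feature asks for is a small dict-layering exercise: package defaults, then Trainer-level defaults, then the user's per-scheduler dict. A sketch with illustrative keys and values (not the library's actual config surface):

```python
SCHEDULER_DEFAULTS = {
    "interval": "epoch",  # when to step the scheduler
    "frequency": 1,       # how many intervals between steps
}

def with_defaults(scheduler_conf, trainer_defaults=None):
    # Later layers win: user's dict > Trainer defaults > package defaults.
    merged = dict(SCHEDULER_DEFAULTS)
    merged.update(trainer_defaults or {})
    merged.update(scheduler_conf)
    return merged

print(with_defaults({"scheduler": "cosine"}, {"interval": "step"}))
# {'interval': 'step', 'frequency': 1, 'scheduler': 'cosine'}
```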
multi-gpu ddp calls validation and testing loops too many times
[ "bug", "help wanted" ]
When using ddp with multiple gpus, each validation and test loop is called with the entire validation dataset for each gpu. Expected behavior is that the dataset is divided appropriately across the gpus. I am using current master (cloned Mar 14), Ubuntu 19.10, Cuda 10.1, python 3.7.5, pytorch 1.4, venv environment. The...
No Callbacks for Validation Batch Step - How To Get Progress of Validation?
[ "feature", "help wanted" ]
πŸš€ Feature The Callback API has two functions, on_batch_start and on_batch_end. The documentation and code execution show that these are only called for training batches, not validation (or test). I am building a Callback for my own logging/dashboarding via Streamlit and I have a requirement to track the progress of both t...
training_epoch_end needs to return a "loss" key in the dict
[ "docs" ]
πŸ“š Documentation Hi everyone! In the docs detailing the usage of the logging function training_epoch_end, if "loss": loss is not explicitly passed as a return value, then the code will fail. The docs at https://pytorch-lightning.readthedocs.io/en/latest/experiment_reporting.html#log-metrics are not correct, def trainin...
Additional dataloader created and discarded when training with reload_dataloaders_every_epoch
[ "bug", "help wanted" ]
πŸ› Bug I am training with reload_dataloaders_every_epoch and I've noticed it instantiates an extra DataLoader before training for which nothing is run. This is an issue for me as I am training with chunks that get loaded every epoch and it is messing with the order I load them in especially if I reload a checkpoint; it...
Restarts ignores `val_check_interval`
[ "bug", "help wanted", "won't fix" ]
πŸ› Bug With val_check_interval != 1, checkpoints are saved in the middle of the epoch, but that location is not saved in the checkpoint, and after a restart, it always begins from the beginning of the last epoch. To reproduce: Run any training with val_check_interval=0.2 Kill training after, say, 40% of an epoch Resta...
neptune.ai logger console error: X-coordinates must be strictly increasing
[ "bug", "help wanted", "logger" ]
πŸ› Bug When using the neptune.ai logger, epochs automatically get logged, even though I never explicitly told it to do so. Also, I get an error, yet everything seems to get logged correctly (apart from the epochs, which also get logged every training step and not every epoch): WARNING:neptune.internal.channels.channel...
hparams need to allow None values
[ "help wanted", "question" ]
πŸ› Bug I can't set hparams.gpus to None: I0318 11:12:45.466972 15554 lightning_utils.py:182] <class '__main__.ResnetLightningExample'> hparams: Namespace(amp_level='O2', backend='', batch_size=16, debug_print_env=False, debug_skip_loaded_hparams_check=False, do_test=False, early_stop_metric='val_loss', early_stop_mode=...
Refactor fit/train/run_pretrain_routine/evaluate/test
[ "discussion" ]
Right now, a lot of model setup happens in fit and run_pretrain routine. However, that model setup needs to happen before we run evaluate, test, etc, so we end up calling fit even when we want to do testing. This makes the code hard to follow. We should refactor model setup out into its own method that fit, train, test...
Early stopping not working on 0.7.1
[ "bug", "help wanted" ]
πŸ› Bug Early stopping does not work anymore. When I downgrade from 0.7.1 or the current dev version to 0.6.0 early stopping works again, with the same code. Code sample def main(hparams): if hparams.early_stopping == 'yes': early_stopping = EarlyStopping( monitor='batch/mean_absolute_loss', ...
Logging the learning rate
[ "feature", "help wanted", "discussion" ]
Hey, I think it would be a cool feature to add a flag enabling the logging of the learning rate(s). Thanks for your amazing work!
Allow custom scatter function in data parallel
[ "help wanted", "question", "won't fix", "strategy: dp" ]
πŸš€ Feature Allow a custom scatter function to be passed to the data parallel module. Motivation Is there a way to customize the scattering process in data parallel? My use case is that I have sparse tensors represented in COO format and they cannot be stored in a single tensor, but require a list to store. In this way, the built...
update docs to recommend __call__ for forward passes
[ "docs" ]
πŸ“š Documentation We should update the docs to recommend usage of self(x) for calculating the forward pass rather than self.forward(x). Calling forward() directly can cause issues when you're using PyTorch model hooks (eg. see the additional logic in nn.Module.__call__). Although most people don't play around with hooks...
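The hook argument can be demonstrated with a toy stand-in for nn.Module (a deliberate simplification; the real nn.Module.__call__ does considerably more):

```python
class TinyModule:
    """Toy module: only __call__ runs registered forward hooks."""
    def __init__(self):
        self._forward_hooks = []

    def register_forward_hook(self, hook):
        self._forward_hooks.append(hook)

    def forward(self, x):
        return x * 2

    def __call__(self, x):
        out = self.forward(x)
        for hook in self._forward_hooks:
            hook(self, x, out)
        return out

m = TinyModule()
seen = []
m.register_forward_hook(lambda mod, inp, out: seen.append(out))
m(3)          # goes through __call__, hook fires
m.forward(3)  # bypasses __call__, hook does not fire
print(seen)   # [6]
```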
Dataloader starving the gpu
[ "bug", "help wanted", "won't fix" ]
Hey, Thank you for this amazing library ! I'm using pytorch_lightning to train a segmentation model on 3D images. Augmentation on these images is quite slow, mostly because I do full volume elastic transforms, which takes ~2s per image on a single cpu. I'm running a large unet with precision 16 and amp_level 'O2', I'm ...
tensorboard hyperparameters don't update
[ "bug", "help wanted" ]
πŸ› Bug Given two sets of HPARAMS, h_1 and h_2 where h_1 is a strict subset of h_2. If you run pytorch lightning with parameters h_1, then h_2, the additional parameters from h_2 are not shown in tensorboard If you run pytorch lightning with parameters h_2, then h_1, the missing parameters from h_1 are shown empty in t...
WandbLogger does not log test results
[ "bug", "help wanted", "logger" ]
πŸ› Bug The WandbLogger does not log test results when testing happens right after training. To Reproduce When running the MNIST example with the WandbLogger like in the following snippet, the test results do not get logged to wandb because it syncs before testing starts: Code sample ... trainer = pl.Trainer(log...
Save checkpoint under the lightning_logs/version_X/ directory
[ "duplicate", "help wanted", "question" ]
πŸ› Bug After running training the output file structure looks like epoch=9_vl_val_loss=10.10.ckpt lightning_logs/ β”œβ”€β”€ version_0 β”‚ β”œβ”€β”€ events.out.tfevents.1585053395.dltn.22357.0 β”‚ └── meta_tags.csv but the expected file structure looks like lightning_logs/ β”œβ”€β”€ version_0 β”‚ β”œβ”€β”€ events.out.tfevents.1585053395.dltn....
How to log hparams to Tensorboard?
[ "question" ]
Hello! I'm trying to view my hparams on tensorboard, but can't actually see them there. As I understood from documentation, to log hparams one should add self.hparams in the init of the LightningModule. Here's what I'm doing: class MyModule(pl.LightningModule): def __init__(self,hparams): super().__init__()...
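The pattern the docs of that era describe is storing the argparse `Namespace` on the module as `self.hparams`; the logger then flattens it (effectively via `vars()`) into the dict shown in TensorBoard's HPARAMS tab. A torch-free sketch of that flow, with `MyModule` standing in for a `LightningModule`:

```python
# Sketch of the documented pattern: assign the Namespace to self.hparams
# in __init__ so the logger can pick it up. vars() yields the flat dict
# that ends up in the HPARAMS tab.

from argparse import Namespace

hparams = Namespace(lr=1e-3, batch_size=32)

class MyModule:  # stands in for pl.LightningModule
    def __init__(self, hparams):
        self.hparams = hparams  # Lightning logs this automatically

model = MyModule(hparams)
logged = vars(model.hparams)
```

Note that the HPARAMS tab only populates once at least one metric has also been logged alongside the hyperparameters.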
data preprocess tool using hdf5 or tfrecord
[ "feature", "help wanted", "won't fix", "design", "3rd party" ]
πŸš€ Feature A subpackage or tool using HDF5 or TFRecord to preprocess data into one single file. Motivation In some fields, like ASR or CV, the plain PyTorch dataloader alone is not ideal, because online data processing, such as making fbank features (ASR) or applying transforms (CV), costs speed. And HDF5 or TFRecord ...
AdvancedProfiler error
[ "bug", "help wanted" ]
Hi, as others have pointed out, the Profiler doesn't seem to work (it prints nothing), and trying out the AdvancedProfiler as in https://pytorch-lightning.readthedocs.io/en/latest/profiler.html like: from pytorch_lightning.profiler import AdvancedProfiler profiler = AdvancedProfiler(output_filename="prof.txt") ...
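`AdvancedProfiler` is built on the stdlib `cProfile`, so the report it should be writing to `output_filename` looks like the one produced below, here profiling a stand-in for a training step rather than a real Lightning run.

```python
# What AdvancedProfiler's output is expected to contain: a cProfile/pstats
# per-function report. `training_step` here is just a stand-in workload.

import cProfile
import io
import pstats

def training_step():
    return sum(i * i for i in range(1000))

profiler = cProfile.Profile()
profiler.enable()
training_step()
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats()
report = stream.getvalue()
```

If a Lightning profiler report comes out empty, checking whether the raw `cProfile` report above works can help isolate whether the problem is the profiler itself or where Lightning writes its output.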
Validation progress bar with metrics
[ "feature", "help wanted", "good first issue", "let's do it!" ]
πŸš€ Feature Logging validation metrics throughout the duration of validation (e.g., current batch loss/acc or average batch loss/acc so far). Motivation If the validation set is large, it'd help to know right away if I've loaded the wrong checkpoint or loaded a checkpoint in the wrong way within a couple iterations, ra...
Allow callbacks to access internal variables of training loop
[ "feature", "help wanted", "question", "won't fix", "let's do it!" ]
πŸš€ Feature Internal variables (batch, predictions, etc) of the training loop (training + validation step) should be made transparent to callbacks. Right now, there is no way to access these internal variables of the training loop through callbacks without making them as attributes of lightning module. This doesn't soun...
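A toy sketch of the requested behaviour: the training loop hands its internal variables (batch, prediction) to the callback hook directly, instead of callbacks having to read them off the module. The hook signature here is hypothetical, not Lightning's actual `Callback` API.

```python
# Hypothetical sketch: a loop that passes its internals to callback hooks.

class RecordingCallback:
    def __init__(self):
        self.seen = []

    def on_batch_end(self, batch, prediction):  # hypothetical signature
        self.seen.append((batch, prediction))

def run_loop(batches, model, callbacks):
    for batch in batches:
        prediction = model(batch)
        for cb in callbacks:
            cb.on_batch_end(batch, prediction)

cb = RecordingCallback()
run_loop([1, 2, 3], model=lambda x: x * 10, callbacks=[cb])
```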
add TPU tests
[ "feature", "help wanted", "good first issue", "ci", "accelerator: tpu" ]
πŸš€ Feature we shall also cover TPU usage as we are supporting it Motivation now all changes are tested for GPUs and CPU but we do not have a check for TPU yet Pitch getting coverage back to ~99%
[metrics] Automatic reduction of metrics from several validation steps
[ "feature", "help wanted", "discussion" ]
πŸš€ Feature As per the slack, it could be cool to implement this. More detail below. Motivation To avoid the user having to do this logits = torch.cat(x['logits'] for x in output) labels = torch.cat(x['labels'] for x in output) and so on ... Pitch Something like this: def collate_metrics(self, output): """...
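The proposed reduction can be sketched in plain Python: gather each key across the list of per-step output dicts, so users no longer hand-write the `torch.cat` boilerplate for every key. Here plain lists stand in for tensors; the real version would `torch.cat` the collected values.

```python
# Plain-Python sketch of automatic reduction over validation step outputs.
# Lists stand in for tensors; a real implementation would torch.cat them.

def collate_outputs(outputs):
    collated = {}
    for step_out in outputs:
        for key, value in step_out.items():
            collated.setdefault(key, []).extend(value)
    return collated

outputs = [
    {"logits": [0.1, 0.9], "labels": [0, 1]},
    {"logits": [0.8], "labels": [1]},
]
collated = collate_outputs(outputs)
```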
Validation every epoch with non-finite dataloader
[ "feature", "help wanted" ]
πŸš€ Feature Providing a way to do validation every epoch with non-finite (__len__ not implemented) dataloaders. Motivation Doing validation every epoch is a natural choice, and with finite dataloader you can do it easily by setting val_check_interval=1.0. However, with non-finite dataloader you cannot set val_check_in...
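When the dataloader has no `__len__`, "one epoch" is undefined, so validation has to be scheduled by global step count instead; that is the idea behind passing an integer `val_check_interval`. A minimal scheduling sketch (the function name is illustrative, not a Lightning API):

```python
# Step-based validation scheduling for dataloaders without __len__:
# validate every `check_interval_steps` optimizer steps.

def should_validate(global_step, check_interval_steps):
    return global_step > 0 and global_step % check_interval_steps == 0

validate_at = [s for s in range(10) if should_validate(s, 3)]
```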
Support for passing dataloader to trainer.test()
[ "duplicate", "feature", "help wanted" ]
πŸš€ Feature dl = DataLoader(...) trainer = Trainer(...) trainer.test(model, dl) Motivation In most cases of mine, I got a fixed training/validation dataset but other collaborators would send me different datasets to evaluate my model. In which case, I hope not to modify the code inside the CoolSystem. I prefer to build...
better checking of data returned from training_step
[ "feature", "good first issue", "won't fix" ]
πŸš€ Feature Let's add more validation checks on what's returned from training_step and provide the user with useful error messages when they're not returning the right values. Motivation I feel like I've seen a lot of users confused about what they're supposed to return in training_step and validation_step. Additionally...
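A sketch of the kind of check being proposed: inspect what `training_step` returned and raise a readable error right away instead of a cryptic one further down the loop. The required keys and messages here are illustrative, not Lightning's actual contract.

```python
# Illustrative validation of a training_step return value: fail fast
# with a message that names the problem, rather than erroring downstream.

def check_training_step_output(output):
    if output is None:
        raise ValueError(
            "training_step returned None; return a dict with a 'loss' key."
        )
    if not isinstance(output, dict):
        raise TypeError(
            f"training_step should return a dict, got {type(output).__name__}."
        )
    if "loss" not in output:
        raise KeyError("training_step output is missing the required 'loss' key.")
    return output
```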
Neptune.ai logger slow, lags behind training
[ "bug", "help wanted", "logger" ]
πŸ› Bug When running a script which trains multiple models after another, I came across the problem that the neptune.ai logger lags behind my training quite severely ( when the model has finished training, the logger is only about halfway there). This would not be such a big problem for me if the next model would start...
incorrect run on the test set with overwritten validation_end and test_epoch_end
[ "bug", "help wanted" ]
πŸ› Bug If I override validation_end and test_epoch_end, TrainerEvaluationLoopMixin.evaluate works incorrectly on the test set Suppose we override validation_epoch_end and test_end, but not validation_end and test_epoch_end. (I actually did this since I am a newbie and haven't yet figured out how everything works; also ...
model summarize cannot log to stdout
[ "question" ]
pytorch-lightning/pytorch_lightning/core/lightning.py Line 1446 in ac6692d log.info('\n' + model_summary.__str__()) during training, the default model summarize function can not log the model summary information to stdout...
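Since the summary goes through `log.info(...)`, it is invisible unless that logger has a handler at INFO level; attaching a `StreamHandler` makes it appear. A sketch of the fix, writing to an in-memory stream here in place of `sys.stdout` (the logger name is an assumption; check which logger your Lightning version uses):

```python
# Routing log.info(...) output to a stream: attach a StreamHandler at
# INFO level. In real code, pass sys.stdout instead of the StringIO.

import io
import logging

log = logging.getLogger("lightning")  # logger name may differ by version
log.setLevel(logging.INFO)

stream = io.StringIO()
handler = logging.StreamHandler(stream)
log.addHandler(handler)

log.info("\n%s", "Model summary: 2 layers, 1.2 M params")
handler.flush()
captured = stream.getvalue()
```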
bug(tqdm): creating multiple rows after 70%
[ "bug", "help wanted" ]
πŸ› Bug When I run a training loop, every epoch it would do tqdm well until roughly 70%, and then create more rows until finished: Expected behavior No new tqdm rows Environment cuda: GPU: GeForce GTX 1080 Ti available: True version: 10.1 packages: numpy: 1.18.1 pyTorch_debug:...