Columns: title (string, 5–164 chars) · labels (list) · bodyText (string, 0–46.7k chars)
Train in run_pretrain_routine?
[ "feature", "help wanted" ]
I see that train() is run inside run_pretrain_routine; doesn't that look weird? It at least violates the function name. pytorch-lightning/pytorch_lightning/trainer/trainer.py Line 1003 in d4a02e3 def run_pretrain_routine(self, model: L...
Fix horovod tests that try to access filepath on global rank > 0
[ "bug", "help wanted", "priority: 0", "ci" ]
πŸ› Bug We had to skip two tests in #2425, namely test_horovod_cpu test_horovod_cpu_implicit Problem is that since they run in ddp and the test tries to access the trainer internal variable for the checkpoint path, it gets a NoneType error when trying to os.join() None paths. To Reproduce Steps to reproduce the behavi...
`validation_epoch_end` and `test_epoch_end` can't return nothing
[ "bug", "help wanted", "good first issue" ]
πŸ› Bug If validation_epoch_end or test_epoch_end returns nothing (as presented as an option in the documentation), an error occurs. (Happy to work on a PR to fix this) To Reproduce Steps to reproduce the behavior: Overwrite test_epoch_end and remove return (same for validation_epoch_end File "/.conda/envs/PPI-env/lib...
validation_epoch_end needs to return CUDA tensors
[ "bug", "help wanted" ]
πŸ› Bug I'm not sure if this is expected behaviour or not, but upgrading to the latest version (from 0.8.1) caused my validation_epoch_end to break. It appears that a CUDA tensor is expected for the metric where before the tensor was device agnostic. This was using sklearn's roc_auc_score. I haven't yet got around to t...
Test function
[ "question" ]
Hello, my model worked well on version 0.7.3. Then I tried to update pytorch-lightning to version 0.8.3. I re-trained and tested my model. The training passed well, but in the test() method I met an error: Traceback (most recent call last): File "src/main_cv.py", line 283, in <module> ...
training_epoch_end log output gets combined with next epoch training
[ "bug", "help wanted", "priority: 0" ]
πŸ› Bug So, I put 'training_epoch_end' function in my LightningModule. I have it return this dictionary {'log':{'train_loss': tensor(0.3616, device='cuda:0'), 'epoch': 0} I check the run_training_epoch_end function in the Pytorch library, it looks like it is working normally as log_epoch_metrics is showing the 'log' par...
Wandb Flatten Dict
[ "bug", "help wanted", "logger" ]
Wandb logger should flatten the dictionary of parameters before logging. Every other logger has the below pattern of code: params = self._convert_params(params) params = self._flatten_dict(params) πŸ› Bug Wandb logger does not flatten parameters, resulting in dictionaries being logged to Wandb, which are not searchab...
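For illustration, here is a minimal sketch of what such a flatten step does; `flatten_dict` is a hypothetical standalone helper, not the actual `_flatten_dict` implementation:

```python
# Hypothetical sketch of a flatten step like the one the other loggers apply;
# not the actual pytorch-lightning `_flatten_dict`.
def flatten_dict(params, delimiter="/"):
    """Turn {"optimizer": {"lr": 1e-3}} into {"optimizer/lr": 1e-3}."""
    flat = {}
    for key, value in params.items():
        if isinstance(value, dict):
            for sub_key, sub_value in flatten_dict(value, delimiter).items():
                flat[f"{key}{delimiter}{sub_key}"] = sub_value
        else:
            flat[key] = value
    return flat

print(flatten_dict({"optimizer": {"lr": 1e-3, "name": "adam"}}))
# {'optimizer/lr': 0.001, 'optimizer/name': 'adam'}
```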
Trainer auto_lr_find flag cannot be set to boolean through argparse
[ "bug", "help wanted" ]
πŸ› Bug It seems that the Trainer auto_lr_find flag can only be set to str type via argparse. If it is not a boolean, Trainer will seek the finder with variable name defined via --auto_lr_find instead of using the default lr_finder. It's not a big issue as we can work around this by some processing after parsing the arg...
WandB Logger always resumes even when `Trainer(resume_from_checkpoint=None)`
[ "bug", "help wanted", "logger" ]
πŸ› Bug @awaelchli As requested, bug above in the title. To Reproduce Steps to reproduce the behavior: model = CoolSystem.load_from_checkpoint(checkpoint) logger = WandbLogger() trainer = Trainer(resume_from_checkpoint=None) trainer.fit(model) The above resumes the wandb run. PL 0.8.4 PyTorch Version (e.g., 1.0): 1.6 n...
Model parallelism in Multi-GPUs
[ "question", "won't fix" ]
Hi everyone! Does Lightning have any way to split very large networks between GPUs, as in the example in this link? https://discuss.pytorch.org/t/model-parallelism-in-multi-gpus-forward-backward-graph/27679/4 Thanks!
How can I perform only validation without training
[ "question", "won't fix" ]
❓ Questions and Help It seems the metric 0.8737 in the checkpoint 'm10-f1_1=0.8737.ckpt' cannot be found in the progress_bar. I want to load the .ckpt and perform validation without training; how should I configure the trainer?
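A minimal sketch of a validation-only run, assuming a 0.8-era API and that `MyModel` (a placeholder for your LightningModule) mirrors its validation logic in the test hooks:

```python
import pytorch_lightning as pl

# Load the trained weights, then run only the evaluation loop.
# trainer.test() runs test_step/test_epoch_end without any training.
model = MyModel.load_from_checkpoint("m10-f1_1=0.8737.ckpt")
trainer = pl.Trainer(gpus=1)
trainer.test(model)
```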
Trainer.scale_batch_size requires model.batch_size instead of model.hparams.batch_size
[ "bug", "help wanted", "good first issue" ]
πŸ› Bug Trainer.scale_batch_size only works if a model has the batch_size property and does not work with model.hparams.batch_size even though all documentation points to the reverse. To Reproduce All of my hyperparameters are available as model.hparams like suggested in the documentation: (hyperparameters, option 3. T...
Initialising model in setup not compatible with auto_scale_batch_size / auto_lr_find
[ "bug", "help wanted" ]
πŸ› Bug To Reproduce Define your model in setup() as per the introduction guide's recommendation for when the model depends on the dataset https://pytorch-lightning.readthedocs.io/en/latest/introduction_guide.html#models-defined-by-data Try to use either auto_scale_batch_size / auto_lr_find File "/home/frankier/.cac...
Dynamic Data Loaders
[ "feature", "help wanted", "won't fix" ]
πŸš€ A dataloader with changeable sampling behavior A DataLoader that takes in a confusion matrix, or class-specific recall, at the end of the epoch. It then oversamples the classes that are least known to the network in the next epoch, and thereby helps the network train more efficiently. Motivation I often work with highly...
Create base TPU image
[ "feature", "help wanted", "good first issue" ]
πŸš€ Feature Create a base Docker image with TPU requirements and all PL dependencies (base + extra). Build this image on a weekly cron and push it to the PL Docker Hub... Motivation Speed up TPU tests, because the repetitive build takes about 7 min. Pitch It could also be used by other developers. Additional context See Docker bu...
CI: proper Conda caching
[ "feature", "help wanted", "good first issue" ]
πŸš€ Feature Fix the GitHub Action test for Conda; it seems that the environment is always created from scratch, which takes about 8 min, even though according to the action's readme we should cache the environment... Motivation Significantly speed up CI tests using Conda https://github.com/goanpeca/setup-miniconda#caching
Training slows down with long epoch
[ "question" ]
❓ Questions and Help Before asking: search the issues. search the docs. What is your question? I'm doing BERT transfer-learning on a single GPU (the same happens with 2 or 4 GPUs...) and on a large dataset. Each epoch has about 1.7M steps and training speed linearly slows down such that at some point, the estimated ...
Training with DataParallel (DP) is broken
[ "bug", "help wanted" ]
πŸ› Bug Currently, the distributed training backend DataParallel (DP) seems to be broken. Using DP will result in error TypeError: zip argument #1 must support iteration. Below is the last few lines of the call stack: File "/home/ubuntu/anaconda3/envs/trfm/lib/python3.7/site-packages/pytorch_lightning/overrides/data_p...
Pass parameters on train_step/validation_step/test_step
[ "feature", "help wanted", "won't fix" ]
πŸš€ Feature As the title says. Motivation I'm currently working with a seq-to-seq architecture, which requires a variable called max_length when decoding outputs. While training it could be fixed as a model hyperparameter, but during testing we may want to vary its value to make predictions longer or shorter as needed. Therefore...
AttributeError on using multi gpu even on using ddp
[ "bug", "help wanted" ]
πŸ› Bug I am getting attribute not found error on using multi gpu. The code works fine on using a single gpu. I am also using ddp as suggested. Here's a traceback. Traceback (most recent call last): File "/home/sohigre/STL/stl_bert_trial_lightning.py", line 245, in <module> trainer.fit(model) File "/home/sohigr...
How to prepare list of files for dataloader, avoiding the duplicated work?
[ "question", "won't fix" ]
I do self.image_paths = sorted(Path(self.hparams["image_path"]).rglob("*.jpg")) in setup and use self.image_paths to initialize the data loader in train_dataloader. I have 10M+ files, and rglob takes some time. My model is trained on 4 GPUs, so as I understand it, rglob runs 4 times. What is the best way to do it, so that i...
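One possible pattern, sketched under the assumption that `prepare_data` runs in a single process while `setup` runs in every DDP process: do the expensive `rglob` once and cache the result to a file.

```python
import pytorch_lightning as pl
from pathlib import Path

class MyModel(pl.LightningModule):  # placeholder module name
    def prepare_data(self):
        # runs once (not per GPU process): do the slow rglob and cache it
        cache = Path(self.hparams["image_path"]) / "image_paths.txt"
        if not cache.exists():
            paths = sorted(Path(self.hparams["image_path"]).rglob("*.jpg"))
            cache.write_text("\n".join(str(p) for p in paths))

    def setup(self, stage):
        # runs in every process: reading the cached list is cheap
        cache = Path(self.hparams["image_path"]) / "image_paths.txt"
        self.image_paths = cache.read_text().splitlines()
```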
How to disable Detected KeyboardInterrupt
[ "bug", "help wanted" ]
πŸ› Bug To Reproduce Steps to reproduce the behavior: I use pycharm Enter F5 key or click pycharm debug Epoch 1: 77%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–‹ | 27/35 [00:12<00:03, 2.08it/s, loss=6.617, v_num=16, train_loss=5.91]/home/blake/anaconda3/envs/torch/lib/python3.7/site-packages/pytorch_lightning/utilities/distributed.py:25: UserWarning...
checkpoint save dir is not correctly set when _save_dir is given by wandb logger
[ "bug", "help wanted", "logger" ]
πŸ› Bug When using ModelCheckpoint with default parameter and Wandb Logger with save_dir set to some dir, The checkpoint is still dumped to os.getcwd() To Reproduce ........ logger = WandbLogger(save_dir='/path/to/experiment') trainer = Trainer.from_argparse_args(other_args, logger = logger) Expected behavior The checkp...
TypeError with multiple validation loaders and overfit_batches
[ "bug", "help wanted" ]
πŸ› Bug A TypeError when using multiple validation datasets and overfit_batches != 0 To Reproduce Steps to reproduce the behavior: Use multiple val_dataloaders Use overfit_batches != 0, e.g. overfit_batches=0.5 Code sample https://colab.research.google.com/drive/1BtQBCoP5fK-aZm_2uLMOUbf2c9cu-yFb?usp=sharing Traceback...
format_checkpoint_name takes global step
[ "feature", "help wanted", "won't fix" ]
πŸš€ Feature Since there is val_check_interval, the model checkpointer could also be enhanced to save in the form of, for example checkpoint_iter2000.ckpt, by defining checkpoint_callback = ModelCheckpoint(filepath='checkpoint_iter{global_step}') https://github.com/PyTorchLightning/PyTorch-Lightning/blob/master/pytorch_l...
Save checkpoint and validate every n steps
[ "feature", "help wanted" ]
❓ Questions and Help How can I save a checkpoint and validate every n steps? I saw there is val_check_interval, but it seems it's not for that purpose.
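It can actually be used for that: with an integer `val_check_interval`, validation runs every n training batches, and checkpointing follows validation. A sketch, assuming a 0.8-era Trainer:

```python
import pytorch_lightning as pl
from pytorch_lightning.callbacks import ModelCheckpoint

# An int val_check_interval means "every N training batches";
# a float would mean a fraction of the epoch instead.
checkpoint_cb = ModelCheckpoint(monitor="val_loss")
trainer = pl.Trainer(
    val_check_interval=1000,            # validate every 1000 steps
    checkpoint_callback=checkpoint_cb,  # checkpoint after each validation
)
```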
TPU fp16 requires apex installed
[ "bug", "help wanted" ]
When I try to use precision=16 on TPU, pytorch-lightning tries to find amp, which is unnecessary. The backtrace is: GPU available: False, used: False TPU available: True, using: 8 TPU cores Traceback (most recent call last): File "bert_ner/light/fp16_debug.py", line 16, in <module> trainer = pl.Trainer(tpu_c...
Possible Bug for multiple optimizers, require_grads=False
[ "bug", "help wanted" ]
πŸ› Bug Hey first of all, I really like your repository (great work). I am using your framework to train two models at the same time, i.e. I followed the GAN example. This means that I have also 2 optimizers. The problem that I am facing is the error: line 99, in backward allow_unreachable=True) # allow_unreachable fla...
Support easily reading and writing checkpoints from different systems instead of just to disk
[ "duplicate", "feature", "help wanted" ]
πŸš€ Feature I want to easily be able to extend model_checkpoint so that it can be used with blob stores instead of always writing to disk. One way to do this is to abstract away the direct os calls behind an interface. Motivation I want to read and write my files from a blob store, not disk. Pitch I want to easily b...
apex amp state dict
[ "bug", "help wanted", "good first issue" ]
πŸ› Bug pytorch-lightning/pytorch_lightning/trainer/training_io.py Line 310 in 25ee51b if self.use_amp and NATIVE_AMP_AVALAIBLE and 'native_amp_scaling_state' in checkpoint: It seems for native amp support, ...
trainer.test(model) on 0.8.4 with a checkpoint saved in 0.6.0 expects attribute 'checkpoint_callback_best_model_path'
[ "bug", "help wanted", "priority: 0", "waiting on author" ]
πŸ› Bug To Reproduce Steps to reproduce the behavior: Save a checkpoint in 0.6.0 Load the model in 0.8.4 (no problem) model = My_model.load_from_checkpoint(checkpoint_path) Run trainer.test(model) See error --------------------------------------------------------------------------- KeyError ...
Simultaneously run multiple optimizers
[ "feature", "help wanted", "won't fix" ]
Currently, optimizers are run sequentially. However, there are cases where two optimizers optimize different parts of the network, for example a CNN+LSTM: one would want the CNN to be updated with SGD and the LSTM with Adam.
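Note that returning two optimizers is already possible (they are just stepped sequentially, which is what this issue asks to change). A sketch of the per-subnetwork setup, assuming `self.cnn` and `self.lstm` submodules:

```python
import torch
import pytorch_lightning as pl

class CNNLSTMModel(pl.LightningModule):
    def configure_optimizers(self):
        # one optimizer per sub-network; Lightning passes optimizer_idx
        # to training_step so each part can be handled separately
        opt_cnn = torch.optim.SGD(self.cnn.parameters(), lr=0.01)
        opt_lstm = torch.optim.Adam(self.lstm.parameters(), lr=1e-3)
        return [opt_cnn, opt_lstm]
```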
Is it possible to plot val and train losses on the same figure?
[ "question", "logger" ]
I could not find a way to do it; maybe there is something I've been missing.
Logging loss is extremely large when training with fp16
[ "bug", "help wanted" ]
πŸ› Bug Logging loss is extremely large(e.g. 10000) when training with fp16, but I find that the loss in tfboard is quite normal(e.g. 1.0). I guess the loss in logging does not discard unstable training steps? Expected behavior Loss in logging should be approximate between turning on/off fp16. Also, logging loss shou...
Flush Events for Tensorboard Logger
[ "feature", "help wanted", "won't fix" ]
πŸš€ Feature Add an option to automatically flush events every epoch for TensorBoardLogger Motivation TensorboardX and Tensorboard only refresh once their "buffer" is filled. Often, the user would like to see an update before the buffer is filled (say, after every epoch). In tensorboardX/tensorflow, this is accomplished ...
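A minimal sketch of the epoch-end flush as a user-side callback, assuming the logger's `.experiment` is a `SummaryWriter`:

```python
from pytorch_lightning.callbacks import Callback

class FlushTensorBoard(Callback):
    """Force the underlying SummaryWriter to flush at each epoch end."""
    def on_epoch_end(self, trainer, pl_module):
        # TensorBoardLogger exposes the SummaryWriter as .experiment
        trainer.logger.experiment.flush()
```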
Resume and load optimizer from ckpt
[ "question" ]
What is your question? I'm trying to load checkpoint following this https://pytorch-lightning.readthedocs.io/en/latest/weights_loading.html#checkpoint-loading. And it seems to me that state_dict is loaded to model without problems, however, optimizer_states does not seem to be loaded. Code import os import pytorch_lig...
Unable to launch multiple gpus nodes
[ "bug", "help wanted", "priority: 0" ]
πŸ› Bug I'm having trouble launching multiple GPU nodes with pytorch-lightning-0.8.5-dev. I'm getting the following error Traceback (most recent call last): File "/home/jmorton/miniconda3/envs/alignment/bin/deepblast-train", line 7, in <module> exec(compile(f.read(), __file__, 'exec')) File "/home/jmorton/resear...
Log debug information to a file
[ "feature", "help wanted", "good first issue" ]
πŸš€ Feature Create a log file for each run with debug information. Motivation User can read the logs and understand in which order things got executed. For example, logs could contain this information: prepare_data_called setup called started train loop training step called backward called zero grad called etc... Pitch ...
ddp no_sync
[ "feature", "help wanted" ]
πŸš€ Feature DDP has this no_sync context manager. https://pytorch.org/docs/stable/nn.html?highlight=no_sync#torch.nn.parallel.DistributedDataParallel.no_sync Could be good to add it when doing gradient accumulation.
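A plain-PyTorch sketch of what the requested integration would do: skip the gradient all-reduce on accumulation steps and sync only on the step that calls `optimizer.step()` (assumes `model`, `loader`, and `optimizer` already exist):

```python
import torch

ddp_model = torch.nn.parallel.DistributedDataParallel(model)
accumulate = 4
for i, batch in enumerate(loader):
    if (i + 1) % accumulate != 0:
        with ddp_model.no_sync():  # gradients accumulate locally, no all-reduce
            ddp_model(batch).sum().backward()
    else:
        ddp_model(batch).sum().backward()  # this backward triggers the all-reduce
        optimizer.step()
        optimizer.zero_grad()
```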
The documents about override backward()
[ "won't fix", "docs" ]
I'm trying to set retain_graph=True in loss.backward(). The documentation says the backward function can be overridden like this: https://pytorch-lightning.readthedocs.io/en/0.7.6/introduction_guide.html#extensibility class LitMNIST(LightningModule): def backward(self, use_amp, loss, optimizer): # do a custom way o...
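For the non-amp case, the override from the 0.7.6 docs reduces to a one-liner; a sketch:

```python
from pytorch_lightning import LightningModule

class LitMNIST(LightningModule):
    def backward(self, use_amp, loss, optimizer):
        # signature as documented in 0.7.6; keep the graph alive so it
        # can be backpropagated through a second time
        loss.backward(retain_graph=True)
```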
batch sampler set_epoch is not called
[ "won't fix" ]
pytorch-lightning/pytorch_lightning/trainer/training_loop.py Line 353 in 1d565e1 self.train_dataloader.sampler.set_epoch(epoch) In the line above batch_sampler can be used to generate batches, but it is impossible to use ...
Unable to run one of the domain_templates
[ "bug", "help wanted" ]
πŸ› Bug `` Hi. I've tried to run computer_vision_fine_tuning.py from domain_templates on Kaggle. After first epoch(5) in the second stage, there was an error. Error /opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/callback_hook.py in on_epoch_start(self) 55 """Called when the epoch begins....
Trainer flag overfit_batches does not overwrite train dataloaders shuffle flag
[ "bug", "help wanted", "won't fix" ]
πŸ› Bug Setting the trainer flag overfit_batches (e.g. =10) does not overwrite the shuffle flag set in the training dataloader, even though the warning reads: UserWarning: You requested to overfit but enabled training dataloader shuffling. We are turning it off for you. To Reproduce Steps to reproduce the behavior: Cr...
How to get the dictionary returned by the validation_epoch_end() method?
[ "question", "won't fix" ]
❓ Questions and Help What is your question? How do I fetch the dictionary returned by the validation_epoch_end() method? I don't want to add this to the TensorBoard logs. Code return {"label_freq": label_freq, "log": {"loss": loss}} I want to get the value of label_freq. What's your environment? OS: [e.g. iOS, Linux, Win] ...
Why is `accumulate_grad_batches` set to 1 for first epoch when the argument is provided as int?
[ "question" ]
πŸš€ Feature Is there any reason why it is set to 1 for the first epoch? I think it should be set to the number users specify, because the current behavior causes a lot of confusion. Alternatives Change the key of the schedule dict to 0 in training_tricks.py: def configure_accumulated_gradients(self, accumulate_grad_batches): if isinstance(accumulat...
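As a user-side workaround, passing a dict makes the schedule explicit; a sketch of the alternative proposed above:

```python
import pytorch_lightning as pl

# Keys of the dict are the epochs at which the value takes effect,
# so {0: 4} accumulates 4 batches from the very first epoch.
trainer = pl.Trainer(accumulate_grad_batches={0: 4})
```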
Minimal example with custom RNNs cells and sliding window support
[ "good first issue", "question", "won't fix", "example" ]
❓ Questions and Help Hello, I would like to ask about pytorch-lightning's support for custom RNN cells and sliding-window predictions over a sequence (i.e. video). I am implementing a kind of conv-LSTM variant for video prediction, where each frame is estimated from the previous frames in a sliding-window m...
How to logging custom metrics during training, validation, and testing?
[ "question" ]
Following the Implement a metric documentation I implemented the following metric: class MRRMetric(TensorMetric): def forward(self, x1, x2): """ Return the mean reciprocal rank using x1 as query and x2 as retrieved results. :param x1: batch of queries embeddings. :param x2: batch of ...
Trainer(resume_from_checkpoint=...) does not load optimizer state
[ "bug", "help wanted" ]
πŸ› Bug The optimizer state is not loaded from the checkpoint. This is important if you want to correctly continue training. In practice, I had serious convergence issues if the optimizer state wasn't loaded. To Reproduce See code sample Code sample import os import torch from torch.nn import functional as F from torch...
torch.save(model) in dump_checkpoint
[ "feature", "help wanted", "won't fix" ]
πŸš€ Feature Currently PL saves only the weights of the network, but param-searched networks change their architecture flexibly while training. Trainer(..., save_entire_model=True) could be an option. Motivation After training a param-searched network with PL and closing the session, I realized I cannot load the weights! :( Saving the only s...
set_epoch isn't called for TPU training
[ "bug", "help wanted", "accelerator: tpu" ]
πŸ› Bug This line https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pytorch_lightning/trainer/training_loop.py#L350 doesn't call set_epoch when training on TPUs unless using self.use_horovod as self.use_ddp is False.
Data is on GPU, some of the nn.Module still on CPU
[ "bug", "help wanted" ]
πŸ› Bug A pytorch module consisting of a Python list containing multiple nn.Conv1d objects and 3 Fully-Connected layers are raising the following error: RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same When trying to do a forward pass on the first nn.Conv1d contain...
What's the true meaning of gpus argument in Trainer
[ "good first issue", "docs" ]
Hello, I have a question about the "gpus" argument of the Trainer() class. The documentation is not clear in my view. From the docs, I only know that "gpus: Which GPUs to train on." However, from the link here: https://pytorch-lightning.readthedocs.io/en/latest/multi_gpu.html, I get a different answer. 1. DEFAUL...
Expose load_state_dict strict=False
[ "feature", "help wanted" ]
πŸš€ Feature In contrastive learning, we normally train a representation-learning backbone and then add a classifier. Sometimes I wish to experiment with different classifiers. It would be best to have strict=False exposed, to make the model still load with the user's permission. model = TransferLearningModel.load_from_checkpoi...
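A workaround sketch while `strict=False` is not exposed: load the raw checkpoint and feed its `state_dict` to the plain `nn.Module` loader (`backbone.ckpt` and the constructor call are placeholders):

```python
import torch

ckpt = torch.load("backbone.ckpt", map_location="cpu")
model = TransferLearningModel()  # construct with your own hparams
# PL checkpoints keep the weights under the "state_dict" key;
# strict=False ignores missing and unexpected keys
model.load_state_dict(ckpt["state_dict"], strict=False)
```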
Custom callbacks are lost after resume_from_checkpoint
[ "bug", "help wanted" ]
πŸ› Bug A checkpoint should state everything needed to restore a training session including the state of all callbacks. However, custom callbacks are lost after resuming the trainer from Trainer(resume_from_checkpoint="...ckpt"). To Reproduce Steps to reproduce the behaviour: Define a trainer with a custom callback. t...
to() got an unexpected keyword argument 'non_blocking' for DGLGraph
[ "bug", "help wanted" ]
πŸ› Bug To Reproduce I use dgl library to make a gnn and batch the DGLGraph. No problem during training, but in test, I got a TypeError: to() got an unexpected keyword argument 'non_blocking' <class 'dgl.graph.DGLGraph'> .to() function has no keyword argument 'non_blocking' Code sample Expected behavior Environment ...
Where to use `to(self.device)` ?
[ "question" ]
What is your question? So currently, in my lightning module, I create the multibox module inside training_step in order to send the priorboxes to the right device, but when I define the multibox loss in the __init__ method, the priors don't seem to be on the right device. Code e.g. below works: class SSD_simp...
Apex with auto lr finder fails
[ "bug", "help wanted" ]
Hi! I get this error when using the flags precision=16 and auto_lr_find=True together in Trainer. The error occurs when the learning rate finder finishes and before training starts. I guess it happens because amp.initialize is called for an already initialized model, i.e. it is first called for auto_lr_find and then t...
Trouble tracing why convergence is slower in Lightning
[ "bug", "help wanted" ]
I recently refactored some code from [this tutorial](https://www.assemblyai.com/blog/end-to-end-speech-recognition-pytorch) (which trains speech-to-text on LibriSpeech 100 hr) into Lightning and found it to be converging more slowly and never reaching the same level of loss. I made a lot of changes when I refactored into Pyto...
Adding a `warmup` period to EarlyStopping and ModelCheckpoint
[ "feature", "help wanted", "won't fix" ]
πŸš€ Feature Add an optional warmup period for EarlyStopping and ModelCheckpoint callbacks. Motivation Sometimes the metric you want to monitor can take a number of epochs to stabilize and become meaningful. For example: with GANs, you might want to monitor and minimize G's loss, but usually it starts out unreasonably lo...
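A user-side sketch of the requested warmup, assuming a release where `EarlyStopping` runs its check in `on_epoch_end` (the hook name may differ across versions):

```python
from pytorch_lightning.callbacks import EarlyStopping

class WarmupEarlyStopping(EarlyStopping):
    """Hypothetical EarlyStopping that skips the first `warmup` epochs."""
    def __init__(self, warmup=10, **kwargs):
        super().__init__(**kwargs)
        self.warmup = warmup

    def on_epoch_end(self, trainer, pl_module):
        if trainer.current_epoch < self.warmup:
            return  # metric not meaningful yet, skip the check
        super().on_epoch_end(trainer, pl_module)
```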
How to perform inference on multiple GPUs?
[ "question", "won't fix" ]
I have 4 GPUs, 1m images. I would like to use all 4 of them to run the method test. Is there any tutorial that shows how to do this?
Checkpoints cannot be loaded in non-pl env
[ "bug", "help wanted" ]
πŸš€ Feature Add an option to save only the state_dict in ModelCheckpoint callbacks. πŸ› Bug PL checkpoints cannot be loaded in non-PL environments. Motivation To be able to move trained models and weights into PyTorch-only environments. Additional context Currently, when you do torch.load() on a PL-generated checkpoint in an enviro...
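A workaround sketch: in an environment that still has PL installed, re-save just the weights so the file loads anywhere (`pl_model.ckpt` and `weights_only.pt` are placeholder paths):

```python
import torch

# A PL checkpoint is a dict with extra trainer state; the weights
# live under "state_dict". Saving only that key yields a file that
# plain PyTorch can load without pytorch_lightning installed.
ckpt = torch.load("pl_model.ckpt", map_location="cpu")
torch.save(ckpt["state_dict"], "weights_only.pt")
```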
How to reload partial weights from the trained checkpoint?
[ "question" ]
What is your question? I want to reload partial weights from a trained checkpoint and let the remaining parameters train from scratch, but I didn't find an API that allows me to reload partial parameters. Code My code is like this; I find PL only supports a resume_from_checkpoint path to reload all weights from the check...
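A manual sketch for partial loading: copy only the overlapping, shape-matching keys and leave the rest freshly initialized (assumes `model` is your constructed module; `trained.ckpt` is a placeholder path):

```python
import torch

ckpt_state = torch.load("trained.ckpt", map_location="cpu")["state_dict"]
model_state = model.state_dict()
# keep only keys that exist in the new model with matching shapes
partial = {k: v for k, v in ckpt_state.items()
           if k in model_state and v.shape == model_state[k].shape}
model_state.update(partial)
model.load_state_dict(model_state)
```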
Pytorch lightning switched to cpu in the middle of training. How can I debug this?
[ "bug", "help wanted" ]
πŸ› Bug So I am training a model and suddenly PyTorch lightning started using the CPU. I checked the GPU status and it is working normally. I have never seen this before. Can Pytorch start using CPU in the middle of training (at the next epoch)? The weird thing is the next epoch was using the gpu. EDIT: Some asked how...
Logging gradients and a possible bug in the docs?
[ "question", "won't fix" ]
TL;DR Question: How do we correctly log gradients to tensorboard? I am trying to log gradients to tensorboard to track NaN loss in a speech application. The docs suggest using on_after_backward along with this code that appears to be incorrect: # example to inspect gradient information in tensorboard if self.trainer.g...
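A sketch of gradient logging via `on_after_backward`, assuming a TensorBoard-backed logger (names are illustrative):

```python
import pytorch_lightning as pl

class MyModule(pl.LightningModule):  # placeholder module
    def on_after_backward(self):
        # log gradient histograms every 25 steps; guard against
        # parameters that received no gradient this step
        step = self.trainer.global_step
        if step % 25 == 0:
            for name, param in self.named_parameters():
                if param.grad is not None:
                    self.logger.experiment.add_histogram(
                        f"grad/{name}", param.grad, step)
```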
Document ddp_cpu
[ "feature", "good first issue", "won't fix", "docs" ]
πŸ“š Documentation For typos and doc fixes, please go ahead and: Create an issue. Fix the typo. Submit a PR. Thanks!
Plotting learning rate from a lr_scheduler via a Callback
[ "feature", "good first issue", "question" ]
I think the title explains a lot, but let me elaborate: I have a LightningModule whose configure_optimizers method returns an optimizer and a scheduler. Later, in a Callback, I have an on_batch_end function in which I try to log the learning rate. Of course, if the scheduler were accessible as a class member, we could...
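Until the scheduler is exposed, the current learning rate can be read from the optimizers' param groups; a callback sketch:

```python
from pytorch_lightning.callbacks import Callback

class LRLogger(Callback):
    """Log the current learning rate of every optimizer/param group."""
    def on_batch_end(self, trainer, pl_module):
        # schedulers write the effective lr into the param groups
        for i, opt in enumerate(trainer.optimizers):
            for j, group in enumerate(opt.param_groups):
                pl_module.logger.experiment.add_scalar(
                    f"lr/opt{i}_group{j}", group["lr"], trainer.global_step)
```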
bug in pytorch_lightning.metrics.functional.auroc
[ "bug", "help wanted" ]
the code: def validation_epoch_end(self, outputs): ......... print(total_y_hat.device) print(total_y_true.device) print(total_y_hat) print(total_y_true) print(total_y_hat.shape) print(total_y_true.shape) auc_score = auroc(total_y_hat, total_y_true) the ou...
trainer.test after fp16 training with apex
[ "bug", "help wanted", "priority: 0" ]
Summary In the current huggingface examples/seq2seq/finetune.py, trainer.test fails in fp16 mode with torch 1.5.1. Nowhere in the huggingface code is model.half called. Models may be saved to disk in either fp16 or fp32 format, but since we are resuming from a PL checkpoint, I think PL is controlling the saving and loa...
Default checkpoint location problematic when using docker
[ "bug", "help wanted" ]
The default behavior of ModelCheckpoint is to use os.getcwd(). Outside my docker container, this ended up being the same directory where my tensorboard logs were saved (e.g. /my/dir/tb_logs/default/version_0/checkpoints/). But inside the docker container, it saved to the internal working directory (e.g. /home/default/...
trainer.test not working in ddp
[ "bug", "help wanted", "distributed" ]
πŸ› Bug Testing in ddp is not working in the latest master. To Reproduce I am using the gpu_template example from basic_examples in the repo : "python gpu_template.py --gpus 2 --distributed_backend ddp", where, instead of trainer.fit(model), I am using trainer.test(model). I am getting "RuntimeError: connect() timed out...
training_epoch_end() only gives outputs from one optimizer when multiple optimizers are being used
[ "feature", "won't fix" ]
I'm making an abstract GAN class, where I have the following code: def configure_optimizers(self): return self.g_optimizer, self.d_optimizer def training_step(self, batch: Tuple[Tensor, Tensor], batch_idx, optimizer_idx) -> Dict: X, _ = batch batch_size = X.shape[0] z = self.Z...
Subprocess launched in ddp have the wrong cwd when using hydra.
[ "bug", "help wanted" ]
πŸ› Bug Details: #2639 (comment). I've talked to @omry about the issue and I will send out a fix soon. To Reproduce Please see the comment I posted above. Expected behavior The CWD for subprocesses should be the same as that of the parent, and relative paths should work.
Can't find PyTorch 1.6
[ "question" ]
❓ Can't find PyTorch 1.6 Before asking: search the issues. search the docs. I tried to use native amp, and the documentation says I need PyTorch 1.6. I went to PyTorch's website and can only find 1.5.1. When I tried to install the Preview (Nightly), this error popped up: torchvision 0.8.0 has requirement torch==1.7.0 bu...
All TPU cores create tensorboard logs
[ "bug", "help wanted", "accelerator: tpu", "waiting on author" ]
πŸ› Bug With TPUs, TestTubeLogger writes many empty tensorboard logs, one log per TPU core except one. This confuses tensorboard and prevents it from updating. This is happening because the logger is created before spawning processes then the logger is replicated in each process. To Reproduce Train any model with ptl.Tr...
lightning and apex amp performance not improved
[ "question" ]
❓ lightning and apex amp performance not improved Before asking: search the issues. search the docs. I'm trying to use Lightning and apex amp to speed up ddp training. I tried amp_level O0, O1, O2, and O3, and they all take almost the same time (around 45 minutes). train_loader = DataLoader(dataset=train_dataset, batch...
`replace_sampler_ddp` doesn't create a shuffled sampler
[ "bug", "help wanted", "distributed" ]
πŸ› Bug The DistributedSampler created using replace_sampler_ddp is not shuffled. Check the kwargs here Expected behavior If training dataloader, create a shuffled DistributedSampler, else create a non-shuffled sampler. Even though the train flag is passed to the function here, it is ignored. Environment pytorch-lightni...
Weight tying is broken on TPUs leading to silent errors
[ "bug", "feature", "help wanted", "accelerator: tpu" ]
πŸ› Bug PyTorch/XLA documentation mentions here that weight tying should happen after moving tensors to XLA, otherwise the tensors are copied. This is a silent error that can easily go undetected (thanks to @matt-peters for pointing it out), and it would be good if PL guards the user against it. Notice that weight tying...
Improve Design of DataModule
[ "feature", "help wanted" ]
πŸš€ Feature Motivation Since the introduction of datamodules we have duplicated code. For example, the train/val/test_dataloader methods in LightningModule and DataModule have the same name, same docs, and same signature, but live in two files. Also, the argparse methods are copy-pasted from Trainer; it will become impossibl...
Live long and prosper
[ "question" ]
This issue is just to thank the PyTorch Lightning team. Despite my beginner-level knowledge of PyTorch and other deep learning frameworks, I was able to easily implement a quite complex model that is clearly written and completely reproducible. I hope to contribute in some way in the near future.
TensorBoard logging in validation_step and test_step
[ "question" ]
Even defining the log in all steps of the PL model: def training_step(self, batch, batch_idx): ... # TensorBoard logging log = {"train_loss": train_loss, "train_mrr": train_mrr} return {'loss': train_loss, "log":log} def test_step(self, batch, batch_idx): ... # T...
Horovod and Native Amp not work
[ "bug", "help wanted" ]
##πŸ› Bug I'm no sure if there is a bug. But when I was tryng to use Horovod as backend to do native amp in PyTorch 1.6, Lightning always points to the function that uses apex amp instead. To Reproduce Steps to reproduce the behavior: Go to 'trainer/distrib_parts.py' Run '....' Scroll down to '....' See error Code s...
AttributeError: module 'pytorch_lightning' has no attribute 'LightningDataModule'
[ "bug", "help wanted" ]
Name: pytorch-lightning Version: 0.8.5 following latest doc: https://pytorch-lightning.readthedocs.io/_/downloads/en/latest/pdf/
how to resume in the same folder
[ "question", "won't fix" ]
When I launch the PyTorch Lightning trainer's fit(), it seems you can pass an old checkpoint to resume, but I am not sure whether it is possible (or perhaps deliberately not supported) to simply resume the current training in the same "version" sub-folder?
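One way to get this behaviour is to pin the logger's `version` so it reuses the same sub-folder; a sketch with placeholder paths:

```python
import pytorch_lightning as pl
from pytorch_lightning.loggers import TensorBoardLogger

# Pinning version=3 makes the logger write into version_3 again
# instead of creating version_4; pair it with resume_from_checkpoint.
logger = TensorBoardLogger("lightning_logs", name="my_exp", version=3)
trainer = pl.Trainer(
    logger=logger,
    resume_from_checkpoint="lightning_logs/my_exp/version_3/checkpoints/epoch=9.ckpt",
)
```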
Use pytorch-like `ignore_index` instead of `remove_bg` for IoU
[ "feature", "help wanted" ]
πŸš€ Feature PL's implementation of the IoU metric has a remove_bg flag that allows ignoring the background class. I propose changing it to match PyTorch's loss functions, that is, an ignore_index argument that takes the index of the class to be ignored. Motivation for the proposal Currently, when remove_bg is se...
[DataModule] Datamodule setup in docs shows non-existent stage arg
[ "help wanted", "docs" ]
πŸ› Bug To Reproduce Colab Minimal code Expected behavior DataModule should call prepare_data and setup Environment Colab Additional context PL Version: 0.9.0rc3
How to make stages?
[ "feature", "question", "discussion", "working as intended" ]
Hi! Thank you for a great framework! I've tried to write down stages for training. E.g. in my config: <<: *default stage2: <<: *default datasets: <<: *datasets # 240k root: ['path1', 'path2, 'path3] per_folder_ratio: [1.0, 1.0, 1.0] transfor...
0.8.1 keeps writing into "version_0" folder instead of creating new version_1/2/3...
[]
0.7.6 (I believe) would properly create a new "version_X" folder per run, but since upgrading to 0.8.1, it no longer does this. Here's my logging-related code in my train script, which are then passed onto Trainer: # custom logging directory logger = pl.loggers.TestTubeLogger( save_dir=logging_dir, ...
Docs : Introduction Guide, test_dataloader wrong sequence length in random_split
[ "docs" ]
Docs: Introduction Guide, test_dataloader. For the MNIST dataset, the training set contains 60000 examples and the test set 10000 examples. While creating the test_dataloader, in the code mnist_train = MNIST(os.getcwd(), train=False, download=False, transform=transform) train is set to False, so it reads from the...
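For reference, a corrected split along the lines the guide intends (assuming the usual 55000/5000 train/val split; `transform` as defined in the guide):

```python
import os
from torch.utils.data import random_split
from torchvision.datasets import MNIST

# train=True is the 60000-example split that 55000/5000 expects;
# train=False is the separate 10000-example test set.
mnist_full = MNIST(os.getcwd(), train=True, download=True, transform=transform)
mnist_train, mnist_val = random_split(mnist_full, [55000, 5000])
mnist_test = MNIST(os.getcwd(), train=False, download=True, transform=transform)
```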
Add a test case for running trainer.test without trainer.fit on DDP
[ "bug", "help wanted", "ci", "distributed" ]
πŸ› Bug Running trainer.test(model) using DDp without running trainer.fit hangs. To Reproduce import pytorch_lightning as pl import torch from torch.utils.data import DataLoader, Dataset class RandomDataset(Dataset): def __init__(self, num_samples=100, dim=5): self.num_samples = num_samples self.dim ...
to_categorical should go before get_num_classes in metrics/functional/classification.py
[ "bug" ]
pytorch-lightning/pytorch_lightning/metrics/functional/classification.py Lines 174 to 178 in d18b9ef num_classes = get_num_classes(pred=pred, target=target, num_classes=n...
Speed Drastically Decrease Under Horovod
[ "bug", "help wanted", "won't fix", "3rd party" ]
πŸ› Bug If the backend is set to horovod, training speed will drop drastically. To Reproduce Steps to reproduce the behavior: I run the training job in docker, and the Dockerfile is shown below: FROM nvidia/cuda:10.1-devel-ubuntu18.04 # TensorFlow version is tightly coupled to CUDA and cuDNN so it should be selected c...
Multiple Model Multiple Loss Unsupervised Training
[ "question", "won't fix" ]
Hi, recently I have been trying to standardize one of our research models, which led me to Lightning. I have a situation involving multi-model, multi-loss training, which is described in the post below: https://discuss.pytorch.org/t/multiple-networks-multiple-losses/91130?u=pavanmv Please let me know if this can be achieved ...
Pass second-order closure to all optimizers (not just LBFGS)
[ "feature", "help wanted" ]
πŸš€ Feature I could be wrong, but I noticed the following in the code of lightning module's optimizer_step if on_tpu: xm.optimizer_step(optimizer) elif using_native_amp: self.trainer.scaler.step(optimizer) elif using_lbfgs: optimizer.step(second_order_closure) ...
Different loss functions for training and validation
[ "question", "won't fix", "waiting on author" ]
I am having a strange problem where I am unable to use different loss functions with PyTorch Lightning. So, if I use something like this, it is completely fine: class SegmentationModel(pl.LightningModule): def __init__(self, hparams: dict): self.lossfn = GeneralizedDiceLoss() def training_step(self, bat...
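A sketch of one loss per phase; note the snippet above omits `super().__init__()`, which on its own breaks the module, so it is restored here (`GeneralizedDiceLoss` comes from the question; the validation criterion is an arbitrary example):

```python
import torch
import pytorch_lightning as pl

class SegmentationModel(pl.LightningModule):
    def __init__(self, hparams: dict):
        super().__init__()  # required; omitting it breaks the module
        self.hparams = hparams
        self.train_loss = GeneralizedDiceLoss()      # from the question
        self.val_loss = torch.nn.CrossEntropyLoss()  # any other criterion

    def training_step(self, batch, batch_idx):
        x, y = batch
        return {"loss": self.train_loss(self(x), y)}

    def validation_step(self, batch, batch_idx):
        x, y = batch
        return {"val_loss": self.val_loss(self(x), y)}
```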
EvalResult doesn't do mean_of_gpus if using TensorMetric
[ "bug", "help wanted" ]
I want to use the new AccuracyMetric; it can automatically sync in ddp, but it doesn't divide by world_size. In manual mode, I can divide by world_size by hand in validation_epoch_end, but if I use EvalResult, how do I do this? It only takes the mean across batches, not across GPUs. This is the original code: def validation...
how to run m validation batches after running every n training batches?
[ "help wanted", "question" ]
πŸš€ Feature For example, I'm running a model on a big dataset. After every 10000 training batches, I'd like to run 1000 validation batches to check the avg_training_loss and avg_val_loss. I tried val_check_interval, but it just runs the whole validation dataset, which is too big and time-consuming. How can I validate only part of t...
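A sketch of the combination that would do this, assuming a release where `limit_val_batches` accepts an int (older releases used `val_percent_check`):

```python
import pytorch_lightning as pl

# validate every 10000 training batches, but run only 1000 val batches each time
trainer = pl.Trainer(val_check_interval=10000, limit_val_batches=1000)
```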
Issue with running multiple models in PyTorch Lightning
[ "bug", "help wanted" ]
πŸ› Bug I am developing a system which needs to train dozens of individual models (>50) using Lightning, each with their own TensorBoard plots and logs. My current implementation has one Trainer object per model and it seems like I'm running into an error when I go over ~90 Trainer objects. Interestingly, the error only...
NumpyMetric not mapping back to GPU in multi-GPU training
[ "bug", "help wanted" ]
πŸ› Bug I created a NumpyMetric class for an involved metric that requires numpy operations; however, the metric fails when training on multiple GPUs. After some debugging, this appears to be due to the resulting tensor not being mapped back to the appropriate GPU (or any GPU for that matter). To Reproduce Steps to repr...
Metric on all test data
[ "question" ]
Is there an approach to handle scenarios in which the metric calculated during test_step depends on the entire test set and not just the existing data in the batch?
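One standard pattern: return raw predictions from `test_step` and compute the metric once over everything in `test_epoch_end`; a sketch with a hypothetical `my_whole_set_metric`:

```python
import torch
import pytorch_lightning as pl

class MyModule(pl.LightningModule):  # placeholder module
    def test_step(self, batch, batch_idx):
        x, y = batch
        # return raw outputs; the epoch-end hook receives all of them
        return {"pred": self(x), "target": y}

    def test_epoch_end(self, outputs):
        preds = torch.cat([o["pred"] for o in outputs])
        targets = torch.cat([o["target"] for o in outputs])
        metric = my_whole_set_metric(preds, targets)  # hypothetical metric fn
        return {"log": {"test_metric": metric}}
```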
LR finder broken 2: not sure why (and other tiny bugs)
[ "bug", "help wanted", "won't fix", "trainer: tune", "priority: 2" ]
πŸ› Bug LR finder doesn't seem to work. The model doesn't train when trainer.lr_find(model) is running (the loss metric oscillates around its initial value). When looking at the figure from lr_finder.plot(), I suspected the learning rate wasn't being changed somehow, but internally it does. So I rebuilt a custom LR fin...