Columns: title (string, 5-164 chars), labels (list), bodyText (string, 0-46.7k chars)
Early Stopping Callback not working
[ "help wanted" ]
πŸ› Bug See #2038. Early stopping stopped too early when using Lightning 0.7.7.dev0 (only on a Slurm cluster, not locally, but I might have been using slightly different Lightning versions). After upgrading to current master, early stopping does not work anymore, at all. I had a similar problem because of not having a ...
Suggestion to add the default interval of scheduler in the documentation
[ "help wanted", "good first issue", "docs" ]
📚 Documentation The default interval of the scheduler is per epoch. However, this is not explicitly mentioned in the documentation; I had to dig into the code to figure it out. pytorch-lightning/pytorch_lightning/trainer/optimizers.py Line 86 in 0914873 ...
Hydra MLFlow Clash
[ "bug", "help wanted", "good first issue", "logger" ]
πŸ› Bug When using the MLFlow logger with Hydra, because the parameters passed to the LightningModule is a DictConfig, the condition in the logger/base.py is not met. pytorch-lightning/pytorch_lightning/loggers/base.py Line 177 in 8211256 ...
Keyboard Interrupt launches test but WandbLogger kills the process
[ "help wanted", "question", "won't fix" ]
πŸ› Bug I am using WandBLogger and when I have a run that I want to stop manually with a keyboard interrupt, the model correctly stops training and starts executing the Test function. The problem is that at the same time WandBLogger starts uploading the data and when it is done it kills the process, therefore the test f...
No validation dataset
[ "question" ]
I am just starting to use pytorch_lightning. I have one question regarding the validation set: I may or may not have a validation set during my training. How should the validation_step be structured in this case? Is it enough to do something like this: @pl.data_loader def val_dataloader(self): if has_valset: ...
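A minimal sketch of the pattern being asked about (has_valset and val_dataset are illustrative attributes assumed to be set in __init__); one approach that has worked is to return None, or simply not override val_dataloader, when no validation set exists, with the intent that validation is then skipped:

    import torch.nn.functional as F
    import pytorch_lightning as pl
    from torch.utils.data import DataLoader

    class MyModule(pl.LightningModule):
        # __init__, forward, train_dataloader, configure_optimizers omitted for brevity

        def val_dataloader(self):
            # has_valset / val_dataset are hypothetical attributes set in __init__
            if self.has_valset:
                return DataLoader(self.val_dataset, batch_size=32)
            return None  # intended: no loader, so validation is skipped

        def validation_step(self, batch, batch_idx):
            x, y = batch
            return {'val_loss': F.cross_entropy(self(x), y)}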
0.8.0-dev doesn't save hparams.yaml
[ "bug", "help wanted" ]
πŸ› Bug I install pytorch-lightning from master, version 0.8.0-dev, new version doesn't save arguments in hparams.yaml. old version 0.7.6 can save it correctly Environment PyTorch Version (e.g., 1.0): 1.5 OS (e.g., Linux): Linux How you installed PyTorch (conda, pip, source): source Build command you used (if compiling...
custom training with 0.8.0.dev0 gives import error
[ "help wanted", "question" ]
Due to another issue and the advice to upgrade to master, I upgraded to 0.8.0.dev0. Now, the same model with no code changes gives the error: Traceback (most recent call last): File "/home/luca/project/apps/train_siamnet.py", line 3, in <module> from ..models.siamnet import SiameseNet ImportError: attempted rela...
Adding NVIDIA-SMI like information
[ "feature", "help wanted", "good first issue", "let's do it!" ]
🚀 Feature Add the GPU usage information during training. Motivation Most of the research is done on HPC. Therefore, if I want to see the GPU RAM and usage of my job, I have to open a secondary screen to run "watch nvidia-smi" or "nvidia-smi dmon". Having this info saved in the logs will help to: See if I have space f...
CUDA error: an illegal memory access was encountered after updating to the latest stable packages
[ "help wanted", "won't fix" ]
Can anyone help with this CUDA error: an illegal memory access was encountered ?? It runs fine for several iterations... πŸ› Bug Traceback (most recent call last): File "train_gpu.py", line 237, in <module> main_local(hparam_trial) File "train_gpu.py", line 141, in main_local trainer.fit(model) File "/s...
0.8.0-dev hydra changed working directory for other GPUs with DDP
[ "help wanted" ]
πŸ› Bug When I moved from 0.7.6 to 0.8.0-dev for DicConfig support for saving model hparams, I found that working directory changed for GPUs in DDP setting. I modify from huggingface. Code sample python: can't open file '/home/joe/summarization/models/bart/outputs/2020-06-05/11-57-25/finetune.py': [Errno 2] No such file...
Add multi GPU tests for torchelastic nightly
[ "help wanted", "ci" ]
Add a test to CircleCI that spawns via torchelastic and verifies that training works and that the trained model returned is correct.
[ddp] New ddp implementation doesn't work in notebooks / using scripts
[ "bug", "help wanted", "priority: 0" ]
Using .spawn() to spin off ddp subprocesses had a few problems: everything needs to be picklable; it doesn't work well with num_workers on dataloaders because of spawn; fit(model) trains the model in a subprocess, so the original model is not updated. Those are not limitations of lightning, but of pytorch and py...
Compatibility when DataLoader returns multiple batches, for prefetching purposes
[ "question" ]
I'm trying to convert https://github.com/xcmyz/FastSpeech to Pytorch Lightning. The code does something complicated in the DataLoader where batch_size**2 batches are placed on the device, then iterated over, effectively prefetching batch_size batches to use in an inner loop of your typical pytorch training loop (here) ...
Change prefix in learning rate logger from "lr-" to "lr/"
[ "feature", "help wanted", "won't fix" ]
This allows all lr related values to be folded in tensorboard.
CI: run doctests only on GPU machine
[ "feature", "help wanted", "ci" ]
Currently the doctests run on CPU and GPU but we can't include any GPU related doctests, otherwise they would fail on CPU-only machines. I propose to turn off CI doctests for CPU-only machines and only let them run on machines with GPU (i.e. drone). This would allow us to write doctests also for GPU related stuff. @Bor...
check_model_configuration throws error for no val_dataloader even if val_check_percent == 0
[ "won't fix" ]
The check_model_configuration method raises an error when val_dataloader is None even if val_check_percent is 0, due to the line below. pytorch-lightning/pytorch_lightning/trainer/trainer.py Line 1145 in c09317e if self.is_ove...
LR finder broken
[ "bug", "help wanted", "priority: 0" ]
#614 πŸ› Bug To Reproduce Steps to reproduce the behavior: model = TestModel() trainer = pl.Trainer(gpus=1, default_save_path=exp_path, max_epochs=100) def configure_optimizers(self): optim = torch.optim.Adam(self.parameters(), lr=self.lr) sched = torch.optim.lr_scheduler.ReduceLROnPlateau(optim, '...
Which places are .item() to be used in?
[ "question" ]
Hi, I read some of the example code using Lightning and in some places, there is usage of .item() in the *_epoch_end() methods to convert the loss to scalar. Is it needed or not? In PyTorch, I know this is used to prevent memory leaks, but I am unsure if this is done automatically in Lightning behind the scenes? de...
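For context, a hedged sketch of the pattern the question refers to (names are illustrative, and the method lives inside the LightningModule): calling .item() on the aggregated 0-dim tensor yields a plain Python float, so no computation-graph reference is kept in the logged values:

    import torch

    # inside the LightningModule:
    def training_epoch_end(self, outputs):
        avg_loss = torch.stack([x['loss'] for x in outputs]).mean()
        # .item() converts the 0-dim tensor to a Python float; whether Lightning
        # would also detach this automatically is exactly the question being asked
        return {'log': {'train_loss_epoch': avg_loss.item()}}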
Tensorboard logging by epoch instead of by step
[ "question", "logger" ]
Short question concerning the tensorboard logging: I am using it like this: def training_epoch_end(self, outputs): avg_loss = torch.stack([x['loss'] for x in outputs]).mean() tensorboard_logs = {'train/loss': avg_loss} for name in self.metrics: tensorboard_logs['train/{}'.format(...
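One possible answer, sketched under the assumption that a TensorBoardLogger is attached (so self.logger.experiment is a SummaryWriter): write the epoch-level scalar directly, using the epoch index as the global step:

    import torch

    # inside the LightningModule:
    def training_epoch_end(self, outputs):
        avg_loss = torch.stack([x['loss'] for x in outputs]).mean()
        # use the epoch index, not the optimizer step, as the x-axis value
        self.logger.experiment.add_scalar('train/loss', avg_loss, self.current_epoch)
        return {}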
verify ddp and ddp_spawn implementation
[ "bug", "help wanted", "priority: 0" ]
OmegaConf save to hparams
[ "bug", "help wanted", "priority: 0" ]
πŸ› Bug I have updated to the latest version 0.8.0rc1 and wanted to test out the new OmegaConf support. I can pass a OmegaConf object into my model, although saving to hparams says that OmegaConf is an unsupported type. I maybe doing this wrong, but I followed updated docs which shows this should be supported Traceback...
`save_last` should only keep the most recent checkpoint (along with the top k)
[ "feature", "help wanted", "good first issue" ]
πŸš€ Feature After save_last saves a checkpoint, it removes the previous "last" (i.e. latest) checkpoint (i.e. separate from top k). Motivation For example, for someone limited by disk space, a good strategy during training would be to always save the best checkpoint as well as the latest checkpoint to restore from in ca...
How to use metrics classes of 0.8.0
[ "question" ]
❓ Questions and Help 0.8.0 has a new Metric class which can auto-reduce in ddp, but there are no examples of it. Can you give some examples of how to use it?
How do you save a trained model in standard pytorch format?
[ "question" ]
I've been googling how to save the model on its own so that anyone with torch can just load it and start making predictions, but I've found it difficult to find documentation on this. My assumption was there would be some way to directly access the underlying pytorch model and just pickle it, but I'm unsure how to do this.
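A hedged sketch of one workable route (class and file names are illustrative): since a LightningModule is an nn.Module, its state_dict can be saved and reloaded with plain torch, without Lightning in the loop at prediction time:

    import torch

    lit_model = MyLightningModule.load_from_checkpoint('path/to/best.ckpt')  # illustrative
    torch.save(lit_model.state_dict(), 'weights.pt')

    # later, any module with matching parameter names can load the weights
    plain_model = MyLightningModule()  # or an architecturally identical plain nn.Module
    plain_model.load_state_dict(torch.load('weights.pt'))
    plain_model.eval()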
Wrong order of arguments to loss function in template example for finetuning
[]
Hi, I observed that the order for the loss function in the example for finetuning is switched. Instead of self.loss(y_true, y_logits) it should be self.loss(y_logits, y_true) Below is the exact location of the error: pytorch-lightning/pl_examples/domain_templates/computer_vision_fine_tuning.py ...
Very slow training on colab with TPU
[ "help wanted", "accelerator: tpu" ]
https://colab.research.google.com/drive/1OxoEcbNVCF5aj_9o0axTnKAh8p5I4Ikw?usp=sharing
Early stopping callback
[ "bug", "help wanted" ]
πŸ› Bug Early stopping does not have the desired effect when creating a custom callback. Even when creating a custom callback with the default values, the training will stop before the early stopping before the conditions are met. To Reproduce Create callback early_stop_callback = EarlyStopping( monitor='val_...
Setting of PYTHONHASHSEED has no effect
[ "help wanted", "question" ]
πŸ› Bug (Previously submitted here: #1939, but I didn't use the correct template, so now I'm resubmitting) In pytorch-lightning/pytorch_lightning/trainer/seed.py Line 32 in 9045b6c os.environ["PYTHONHASHSEED"] = str(seed) ...
tensorboard logger should support remote directories
[ "feature", "help wanted" ]
🚀 Feature Tensorboard allows you to write to gcs, s3, hdfs, etc. by specifying paths with the right prefix, e.g. logDir='hdfs://path/to/logs/'. However the lightning logger breaks this. see tensorboard.py#L99 Motivation Training often occurs on remote clusters which don't persist the local disk at the time the job ends. T...
The docker image tagged with Pytorch 1.5 and Python 3.8, has Pytorch 1.4 installed and is running Python 3.7
[ "bug", "help wanted" ]
πŸ› Bug The docker image tagged with Pytorch 1.5, eg 0.8.0rc1-py3.8-torch1.5, has torch 1.4 installed in it, as seen via pip list. Also, it is running Python 3.7 instead of Python 3.8, as the tag indicates. To Reproduce Steps to reproduce the behavior: Pull docker image: docker pull pytorchlightning/pytorch_lightning:0...
Save the whole model object
[ "duplicate", "question" ]
Is there any way of saving a whole model object with PyTorch Lightning? e.g. I want something like this: model = load("mypath") prediction = model.predict(x) without needing access to the original Model class. I know how to load things from a checkpoint, but that requires having access to the model class. Is this pos...
torch.no_grad() during validation step
[ "question" ]
Does PyTorch lightning call torch.no_grad() under the hood during a validation step? The documentation here implies it does NOT but I think it definitely should... can someone confirm? https://pytorch-lightning.readthedocs.io/en/stable/new-project.html
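For reference, a hedged sketch of the plain-PyTorch pattern the question is about (the tiny stand-in model and loader are illustrative); this is a sketch of the general pattern, not a claim about what Lightning does internally:

    import torch
    import torch.nn.functional as F
    from torch.utils.data import DataLoader, TensorDataset

    model = torch.nn.Linear(4, 2)  # stand-in model
    val_loader = DataLoader(TensorDataset(torch.randn(8, 4), torch.randint(0, 2, (8,))),
                            batch_size=4)

    model.eval()
    with torch.no_grad():  # no autograd graph is built inside this block
        for x, y in val_loader:
            val_loss = F.cross_entropy(model(x), y)
    model.train()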
Global Gradient calculation is turned off during validation step.
[ "bug", "help wanted" ]
If an error occurs during the validation step, gradient calculation is turned off for the rest of the runtime; you have to either explicitly re-enable it or restart the runtime!
How to run algorithms where there isn't a need for dataloaders?
[ "question", "won't fix" ]
What is your question? In on-policy algorithms in reinforcement learning, rollouts are generated on the fly and there is no need for a replay buffer and consequently a dataloader. In these cases, the loss function is calculated according to the current states obtained (generally by using multiple parallel environments)...
TypeError: can't pickle _thread.lock objects
[ "bug", "help wanted" ]
❓ Questions and Help What is your question? Hi, everyone. I ran into this problem but I really do not know how to solve it. I've been stuck on it for three or four hours. Code This is the error: Traceback (most recent call last): File "/home/jq/PycharmProjects/Unet/Code/Lit_train.py", line 50, in <module> trainer....
Can you make a new progress bar for each epoch?
[ "question" ]
The progress bar is very slick, but a big problem with it is that it overwrites itself. For example, if you are at epoch 10, you cannot see what the validation and training losses were for epoch 9. Could the progress bar perhaps be made to work more like in Keras, so that you can see the losses and accuracies of previous ...
Reloading Models for use elsewhere?
[ "help wanted", "question", "discussion" ]
What is your question? When we save models with a Checkpoint Callback, we can only load it up by having the original LightningModule that we used to create the model and the checkpoint. Is there some extension of save_checkpoint so that I can save out everything that I would need to reload the module and load the check...
Let's add a `suggested_num_workers()` method?
[ "feature", "good first issue", "design" ]
V1 could be: import multiprocessing def suggest_num_workers(num_accelerators): num_cpus = multiprocessing.cpu_count() return num_cpus * num_accelerators @PyTorchLightning/core-contributors Any other heuristics you guys use?
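A hedged, self-contained version of the sketch above (the heuristic itself is the proposal's own; the DataLoader usage line is illustrative):

    import multiprocessing

    def suggest_num_workers(num_accelerators: int) -> int:
        # one worker per CPU core, scaled by the number of accelerators in use
        return multiprocessing.cpu_count() * num_accelerators

    # illustrative usage: DataLoader(dataset, num_workers=suggest_num_workers(1))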
OmegaConf/hydra error unsupported `0.8.0rc2`
[ "bug", "help wanted", "priority: 0" ]
Hi, I am using Hydra to setup my configurations variables for my trainer and model parameters. I'm using PL version 0.8.0rc2. When I pass the params to my PL module and set the passed params to self.hparams, I get the following error: ValueError: Unsupported config type of <class 'omegaconf.dictconfig.DictConfig'>. I ...
Batch weight for accumulating gradients
[ "feature", "help wanted", "won't fix" ]
🚀 Feature Support different weights for each batch during batch gradient accumulation. Motivation In many cases not every batch is equal. For example, when the inputs are sequences of tokens of variable length, the mean loss of several batches is not the average across the individual batch losses, since each batch wo...
Is there a callback for before "configure_optimizers" is called?
[ "question", "won't fix" ]
Is there a callback for before configure_optimizers is called? I didn't notice anything in the docs, so I am wondering if such a callback exists, or if running code just prior to configure_optimizers should be handled elsewhere.
[hparams] support hparams and params in the init
[ "feature", "help wanted", "accelerator: tpu" ]
πŸ› Bug Gives TypeError while running on Google Colab TPU To Reproduce Steps to reproduce the behavior: https://colab.research.google.com/drive/1K5i4kXzZCvbq3jc8IHPD_bd_EdUSjFml?usp=sharing running trainer.fit(model) on TPU TypeError Traceback (most recent call last) in () 21 ...
module 'pytorch_lightning' has no attribute 'metrics'
[ "bug", "help wanted" ]
module 'pytorch_lightning' has no attribute 'metrics'. To Reproduce I am using master branch installation pip install git+https://github.com/PytorchLightning/pytorch-lightning.git@master --upgrade pl.metrics.AUROC() throws error. Expected behavior As I see metrics are present on master branch but they are not in init.
Best practices when Module __init__ contains Dataset?
[ "question" ]
❓ Questions and Help What is your question? What are best practices when the Module init contains the Dataset? This is useful when the input and output sizes are derived from the data set, and not hparams. However, Module.load_from_checkpoint(...) fails with the following error: TypeError: __init__() missing 1 required...
How to programmatically determine the checkpoint directory?
[ "question" ]
❓ Questions and Help What is your question? How do you programmatically determine the checkpoint directory? pytorch lightning has automatic support for checkpoints and those checkpoints are stored in lightning_logs/VERSION/checkpoints/epoch=BEST.ckpt. However, I don't know how to programmatically determine what the VERS...
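A hedged sketch of one way to derive it, assuming the default TensorBoardLogger (whose log_dir resolves save_dir/name/version_X) and the default 'checkpoints' subfolder layout; trainer is assumed to be the Trainer instance that ran fit():

    import os

    ckpt_dir = os.path.join(trainer.logger.log_dir, 'checkpoints')
    ckpt_files = sorted(os.listdir(ckpt_dir))  # e.g. files like 'epoch=BEST.ckpt'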
CPU/GPU Template
[ "bug", "help wanted", "priority: 0" ]
πŸ› Bug The GPU or CPU template do not run currently on master after changes including the setup hook. python -m pl_examples.basic_examples.gpu_template --gpus 4 --distributed_backend ddp python -m pl_examples.basic_examples.cpu_template CPU Template Error: Traceback (most recent call last): File "/usr/lib/python3.6/...
Precision 16 not transforming inputs to Float16 nor having LSTMs as halfs
[ "bug", "help wanted" ]
πŸ› Bug When using precision 16 to train a model, the LSTM layers are not transformed to accept FP16 and the inputs to the model are FP32 (as mention in the issue #1876). This can be seen with simple modifications of the MNIST model your provide in colab. To Reproduce Execute the following code: import os import pytorc...
model.setup and model.on_fit_start not working
[]
self.model is not yet set, so self.is_function_implemented('on_fit_start') and self.is_function_implemented('setup') in trainer.fit will both always return False. Resolve by checking whether model instead of self has the relevant function, e.g. if callable(getattr(model, f_name, None)): getattr(model, f_name)() relevant lines: ...
Steps not incremented correctly with accumulate gradients
[ "bug", "help wanted" ]
πŸ› Bug global_step and current_epoch do not match up anymore after more than 1 epoch when setting accumulate gradients greater 1. I think at the end of each epoch optimizer_step (and on_before_zero_grad) is not called in that case. To Reproduce Create a pl.LightningModule that logs current_epoch and global_step in eve...
dataframes passed outside hparams causing issues
[ "bug", "help wanted" ]
πŸ› Bug In 0.8.0 (updating from 0.7.5), using hyperparameters parsed by test-tube causes an exception. In 0.8.0 (updating from 0.7.5), dataframes passed outside hparams errors out To Reproduce import argparse import pandas as pd import pytorch_lightning as pl parser = argparse.ArgumentParser() parser.add_argument('--hp...
Weird DP behaviour on AWS P2 16xlarge with Wandb Logger
[ "question", "logger", "3rd party" ]
❓ Questions and Help Before asking: search the issues. search the docs. What is your question? Wandb Logger seems to not log all steps. AWS P2 16xLarge has 16 K80s on 1 node. I am using dp distrib mode. Code My training steps: def training_step(self, batch, batch_idx): loss, logits, *_ = self(batch) ...
Full batch training
[ "question" ]
❓ Questions and Help For smaller datasets, it makes sense to do full-batch training, not minibatch. How do you implement full-batch training in pytorch lightning, given that train and validation might be different sizes?
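A minimal sketch (the stand-in datasets are illustrative): full-batch training is just a DataLoader whose batch size equals the dataset length, chosen separately for train and validation:

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    train_ds = TensorDataset(torch.randn(500, 10), torch.randint(0, 2, (500,)))  # stand-in
    val_ds = TensorDataset(torch.randn(100, 10), torch.randint(0, 2, (100,)))    # stand-in

    train_loader = DataLoader(train_ds, batch_size=len(train_ds), shuffle=False)
    val_loader = DataLoader(val_ds, batch_size=len(val_ds))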
Improve Exception Handling
[ "feature", "help wanted", "good first issue", "let's do it!", "priority: 2" ]
🚀 Code Quality Improvement I came across this a few times already: try: # something except Exception: # something This is the worst possible way to handle exceptions. It is better to catch the specific exception or at least log a message. Alternatives None. Sooner or later someone has to deal with this anyway :)
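A hedged sketch of the improvement being proposed (names are illustrative): catch the narrowest exception and either handle it or log it, rather than swallowing everything:

    import logging

    logging.basicConfig()
    log = logging.getLogger(__name__)

    raw_value = 'not-a-number'  # illustrative input
    try:
        value = int(raw_value)
    except ValueError as err:
        # narrow except: only parsing failures are handled, everything else propagates
        log.warning('could not parse %r: %s', raw_value, err)
        value = 0  # or re-raise, depending on the call site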
'Trainer' object has no attribute 'proc_rank'
[ "help wanted", "question" ]
πŸ› Bug 1st epoch runs to completion and the above error is thrown in the is_logger() method. To Reproduce AttributeError Traceback (most recent call last) <ipython-input-14-1b9ebf437115> in <module>() 3 trainer = pl.Trainer(**train_params) 4 ----> 5 trainer.fit(model) 8 frame...
Could I convert lightning module to onnx? Thanks!
[ "feature", "let's do it!" ]
🚀 Feature PyTorch Lightning works very well, but I cannot find any comments or examples to guide converting a pretrained lightning model to onnx. Is the lightning module only meant for research purposes, without support for cross-platform onnx export?
TensorMetric not updated to cuda device
[ "bug", "help wanted", "priority: 0" ]
πŸ› Bug When tensors located on the GPU are passed through a TensorMetric forward method, they are transfered to the CPU by this method. It seems that the 'cpu' device is not updated when training on a gpu. The cpu device seems to be hardcoded here To Reproduce Steps to reproduce the behavior: import torch import torch....
_has_len does not handle NotImplementedError (raised by torchtext)
[ "bug", "help wanted" ]
πŸ› Bug When using torchtext.data.Iterator with a batch_size_fn function the len function raises a NotImplementedError which is not caught by _has_len function. A bug-fix is very simple by just returning False if a NotImplementedError is raised. This is unlikely to have any negative side effects since it corresponds w...
Clarification for Lr Scheduler ReduceLROnPlateau in PL
[ "question" ]
Maybe I missed it in the PL documentation; however, the learning rate scheduler ReduceLROnPlateau appears to take an argument for which metric to monitor during training. Does PL handle passing this argument? If so, is there any way for the user to specify which argument is passed to the scheduler? From Pytorch...
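A hedged sketch, assuming the installed version supports the scheduler-dict format with a 'monitor' key in configure_optimizers (the monitored name must match a metric that is actually logged):

    import torch

    # inside the LightningModule:
    def configure_optimizers(self):
        opt = torch.optim.Adam(self.parameters(), lr=1e-3)
        sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, mode='min', patience=3)
        # 'monitor' names the metric passed to sched.step()
        return [opt], [{'scheduler': sched, 'monitor': 'val_loss', 'interval': 'epoch'}]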
example_input_array dtype
[ "bug", "discussion" ]
It is currently assumed that the example_input_array dtype is equal to the model dtype. This is not necessarily correct, e.g. if the input is a vector of ints. pytorch-lightning/pytorch_lightning/core/memory.py Line 192 in 7dc58bd input_ = a...
Scipy LBFGS with Lightning
[ "feature", "help wanted", "won't fix" ]
🚀 Feature Nico Pinto (p.c.) says that the vanilla Scipy LBFGS optimizer has produced really good results for him, better than Pytorch's LBFGS implementation. I am trying to use this wrapper, but similar to this issue I get the error TypeError: step() missing 1 required positional argument: 'closure' How can I get the ...
Validation turns on midway during training instead of epoch end - Progress Bar
[ "bug", "help wanted" ]
Validation is starting before epoch end; it starts during the first training epoch. I've attached a snippet of the command prompt. EDIT: the epoch progress bar includes validation progress. Please close the issue.
DDP Bug with Model Checkpoint parsing
[ "bug", "help wanted" ]
πŸ› Bug My script works with CPU, single-GPU and dp. I need ddp to do 16 bit training. Also even on a single machine ddp is faster. Here is my ModelCheckpoint code: def setup_model_checkpoint(config): kwargs = config["model_checkpoint_kwargs"] metrics = kwargs.pop("metrics", ["val_loss"]) if isinstance(metri...
Support HTCondor in addition to SLURM
[ "feature", "help wanted", "won't fix" ]
πŸ› Bug New DDP implementation is still not working for me. I am in the situation where I need to send a single job i.e. run a single executable due to the batch system we have on our university GPU cluster. Therefore, rather than calling the training script multiple times with different env variables as suggested in th...
Undefined variable on DDP2 trainer
[ "bug", "help wanted" ]
πŸ› Bug I am trying to run an experiment in multiple gpus, but a single machine. If I understood right, using ddp2 would be faster than dp and would allow me to use the aggregated results in training_step_end and validation_step_end, which ddp doesn't provide. For some reason I am getting this error: Traceback (most re...
How to make inference right
[ "question" ]
Hello everyone. I'm new to pytorch-lightning, but already excited about this framework. It's very convenient to train my models using lightning. Now my use case is: I have trained my model and want to do inference on my test data and get results (for example, in csv format). I'd like to do my inference pytorch-lightning-...
how to train a network that doesn't require any training data
[ "question" ]
The Wake-Sleep algorithm doesn't require any data during the sleep phase (effectively it generates its own data). pytorch-lightning, however, appears to require a train_dataloader() method. The only way I have found to make pytorch-lightning run at all (for this admittedly unusual case) is to specify some dummy dataset in tra...
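A hedged sketch of the dummy-dataset workaround mentioned above (generate_samples and sleep_phase_loss are hypothetical): the dataloader only fixes the number of steps per epoch, and each training_step generates its own data:

    import torch
    import pytorch_lightning as pl
    from torch.utils.data import DataLoader, TensorDataset

    class SleepPhase(pl.LightningModule):  # model details omitted
        def train_dataloader(self):
            dummy = TensorDataset(torch.zeros(100, 1))  # 100 "steps" per epoch, contents unused
            return DataLoader(dummy, batch_size=1)

        def training_step(self, batch, batch_idx):
            samples = self.generate_samples()      # hypothetical generator
            loss = self.sleep_phase_loss(samples)  # hypothetical loss
            return {'loss': loss}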
overfit_batches doesn't work
[ "bug", "help wanted", "priority: 0" ]
When I try to use overfit_batches: https://pytorch-lightning.readthedocs.io/en/latest/debugging.html#make-model-overfit-on-subset-of-data trainer = Trainer(gpus=num_gpus, max_epochs=config.epochs, overfit_batches=0.01, logger=logger) my code fails with: trainer.fit(module) File "/home/andriy/miniconda3/envs/patc...
GPU out of memory error after few batches
[ "question", "won't fix" ]
I am trying to train a complex model involving multiple convolutions. Since my own implementation was very slow (taking ~2 hours for an epoch which increased further after a few epochs), I tried changing my code to incorporate lightning module. With lightning, I'm getting a CUDA OOM after 1/3rd of the total no of batch...
A simple logger for notebooks or repl
[ "feature", "help wanted" ]
πŸš€ Feature A basic logger that can display results in a table in the repl or in a notebook as the network trains and saves the results to a csv file. Motivation A lightweight logger is useful for environments where you might not have port access or for interactive experimentation. Pitch I created a logger at this gist ...
Breaking compatibility with custom datatypes implementing `.to`
[ "bug" ]
πŸš€ Feature Bring back compatibility for custom datatypes in collections implementing .to for transferring data. Motivation I am using Pytorch Lightning together with Pytorch Geometric. Pytorch Geometric implements several custom datatypes and dataloaders which is really useful for geometric deep learning. Everything ...
Logging on slurm stopped working
[ "bug", "help wanted" ]
πŸ› Bug Logging and checkpoint saving stopped working for me when I run experiments via slurm system. I am using log keys in return functions: training_epoch_end/validation_epoch_end. Version 0.7.6 works. To Reproduce Steps to reproduce the behaviour: Define Tensorboard logger Run training using slurm system sbatch ......
Using s3 backed for Mlflow logger fails
[ "feature", "help wanted", "logger" ]
πŸ› Bug MLflow logger support for s3 URI's. It is already supported by MLflow 1.9.0, https://www.mlflow.org/docs/latest/tracking.html#amazon-s3, but passing an s3 URI to the logger's tracking_uri parameter fails. To Reproduce Steps to reproduce the behavior: init a lighting_module and a trainer. init a mlflow_logger wi...
Using `overfit_batches` with multiple expected validation dataloaders can cause problems
[ "feature", "help wanted", "won't fix" ]
As the title says, my validation_epoch_end expects outputs from 2 dataloaders, but when using overfit_batches it only receives 1, which causes my code to crash. The simplest solution I can think of is to include the number_of_validation_dataloaders in the validation_epoch_end method to more easily handle this situati...
Missing training_step outputs in training_epoch_end
[ "bug", "help wanted" ]
bugfix of this issue: #2320
AttributeError: 'LightningDataParallel' object has no attribute 'teardown'
[ "bug", "help wanted" ]
πŸ› Bug To Reproduce Steps to reproduce the behavior: trainer = pytorch_lightning.Trainer( gpus=2, distributed_backend='dp' ) model = BaseModel.load_from_checkpoint(...) trainer.test(model) Traceback (most recent call last): File "run_kitti.py", line 351, in trainer.test(model) File "/opt/conda/lib/python3.7/...
Multi GPU Training: No kernel image is available for execution on the device
[ "bug", "help wanted" ]
I started using PL thinking about the ease of training my model using multiple GPUs. I'm basically using some Transformers from Huggingface. LightningModule import hydra import torch from pytorch_lightning.core.lightning import LightningModule from torchtext import data from transformers import AutoTokenizer from sou...
max_steps does not work if resume_from_checkpoint is specified
[ "bug", "help wanted" ]
πŸ› Bug max_steps does not work if resume_from_checkpoint is specified To Reproduce Steps to reproduce the behavior: Specify max_steps Specify resume_from_checkpoint Code sample import os import torch from torch.nn import functional as F from torch.utils.data import DataLoader from torchvision.datasets import MNIS...
save_hyperparameters incorrect documentation
[ "docs" ]
📚 Documentation The documentation has examples self.save_hyperparameters(['layer_1_dim', 'learning_rate']) This fails due to type list not being supported as a hyperparameter. However, looking at the source for save_hyperparameters it appears that the correct usage is self.save_hyperparameters('layer_1_dim', 'learn...
Error when importing metrics on Windows without DDP support
[ "feature", "help wanted" ]
πŸ› Bug When loading the new metrics the following AttributeError is raised: AttributeError: module 'torch.distributed' has no attribute 'ReduceOp'. The problem is the use of torch.distributed.ReduceOp in the type hints and some functions of pytorch_lightning.metrics.converters.py. A similar (identical?) issue was discu...
Model validation code is not called
[ "bug", "help wanted" ]
πŸ› Bug My defined methods for validation_step as well as validation_epoch_end do not seem to get called. To Reproduce Just call the provided code sample. Python should show the NotImplementedError. Instead the model completes 'successfully'. Code sample import pytorch_lightning as pl import torch import torch.nn as nn ...
Reproducibility issue.
[ "question" ]
I am a newbie in using pytorch and pytorch-lightning. My code classifies the "Sequential" MNIST image, where 4 pixels are fed into the LSTM cell for each time step and one image is processed in 28*28/4 time steps. Since I provide a seed via pytorch_lightning.utilities.seed.seed_everything, the same result is expected fo...
Problem with loading checkpoint of a model with embeddings
[ "bug", "help wanted" ]
πŸ› Bug Unable to load from checkpoint for model with embeddings Code sample model arch class Model(pl.LightningModule): def __init__(self, emb_szs): super().__init__() m = get_base() self.enc = nn.Sequential(*list(m.children())[:-1], nn.Flatten()) nc = list(m.c...
Trainer.test() returns type error while loading model after upgrading from pl 0.7.6 to 0.8 and 0.8.2dev
[ "bug", "help wanted" ]
πŸ› Bug I am running a Transformer model with a custom data set. Everything worked fine with v0.76, except for early stopping. However after upgrading to v0.8+ Trainer.test() would return type error while loading model (expected int but got dict). I have tried two different datasets with the same model which returned sa...
How To: Specify input_shape when it's not known in advance
[ "question", "won't fix" ]
As far as I can see all of the examples assume that the input shape is known in advance, i.e. MNIST images which have fixed C,H and W. But I'm working with multi-variate time series data, superficially investigating transforms which alter the number of input series. In the example below, transforms is passed to the con...
Access the logging directory through LightningModule or Trainer
[ "question" ]
Is there a way to access the current logging directory (e.g., lightning_logs/version_x)? I've searched the documentation and the source code but haven't found a solution yet. I want to save some intermediate raw tensors to that directory. Thanks, David
Incorrect docs in metrics
[ "docs" ]
πŸ“š Documentation Here pytorch-lightning/pytorch_lightning/metrics/functional/classification.py Line 137 in a5f4578 num_classes: Optional[int] = None, the argument is num_classes but in docs it is class_inde...
An Extra argument passed to the class, loaded from load_from_checkpoint.
[ "bug", "help wanted" ]
πŸ› Bug Hello, I was facing few issues while using the trainer.test() function, on debugging I found out that the problem was with the _load_model_state class method which is called by load_from_checkpoint. Code For reference @classmethod def _load_model_state(cls, checkpoint: Dict[str, Any], *args, **kwargs): # ...
How to make LR scheduler verbose?
[ "question", "won't fix" ]
❓ Questions and Help Before asking: search the issues. search the docs. What is your question? Hi, I am currently using the ReduceLROnPlateau scheduler and returning it as a [dictionary] in the configure_optimizers method. All other options seem to work, but I cannot seem to make the scheduler verbose. A...
cannot unpack non-iterable NoneType object when predicting with test function/dataloader
[ "bug", "help wanted" ]
Hi, When trying to evaluate the autoencoder with test dataset, we got the error: cannot unpack non-iterable NoneType object def training_step(self, train_batch, batch_idx): x, _ = train_batch decoded, encoded = self.forward(x) loss = self.mse(decoded,x) tensorboard_logs = {'train_loss': ...
Will load_from_checkpoint load Huggingface models as well?
[ "question" ]
What is your question? Just wanted to know will using the load_from_checkpoint for a LightningModule load the state_dict for the HuggingFace models as well? Eg: for the given example in the docs, will state_dict be loaded for BertModel.from_pretrained thing as well? Ideally, load_from_checkpoint should load state_dict ...
Logging hyperparams and metrics
[ "feature", "question", "discussion" ]
My question is how do I log both hyperparams and metrics so that tensorboard works "properly". I've copied pytorch_lightning.loggers.TensorBoardLogger into a catboost/hyperopt project, and using the code below after each iteration I get the result I'm after, on the tensorboard HPARAMS page both the hyperparameters and ...
Cluster job that spawns its own processes for use with DDP
[ "feature", "help wanted", "won't fix" ]
πŸš€ Feature Not sure if the title is appropriate. This feature would support the use case where The manager sets MASTER_ADDR and MASTER_PORT User knows how to set LOCAL_RANK, GLOBAL_RANK, and WORLD_SIZE Each node has N_g GPUs N_j jobs are spawned (in my case, MPI on SLURM) for each gpu, i.e., world_size= N_j * N_g Eac...
save the model/training source code to model checkpoint or logs directory
[ "feature", "help wanted", "won't fix" ]
🚀 Feature Save the model/training source code to the model checkpoint or logs directory Motivation Now, the hparams are saved in a yaml file. Sometimes we change not only the hparams but also the network arch and the pre-processing flow, so if we save the related source code with the model, we will have all the information to rest...
Cannot Transfer Batch Data to Device
[ "bug", "help wanted", "priority: 0" ]
πŸ› Bug After upgrading from 0.8.1 to 0.8.2, error occurs during training when the data is being transferred to the device. There is no problem with 0.8.1 but only with 0.8.2. To Reproduce It might be a little difficult to share the code here, but I suspect that might due to a mistake of defining "dtype" variable somewh...
TPU MNIST demo hangs in last batch
[ "bug", "help wanted" ]
I am afraid this is not working for me. _Remark : There have been various posts about this or very similar issues, but as far as I can see they have all been closed. Example: #1590 In fact I posted this exact comment in the following issue, when it was already closed. #1403 I am therefore creating this issue, because I...
testing gets stuck when num_workers is set to value >0 in tests/base/model_utilities.py
[ "bug", "help wanted" ]
πŸ› Bug While executing bash .run_local_tests.sh the test hangs frequently (but not always) if parallel data loading is enabled in tests/base/model_utilities.py by setting num_workers to a value larger than 0. If an manual keyboard interrupt (CTRL-c) is done the test continues with a "PASSED" message. This is an issue t...
Imagenet example use num_workers 0
[ "question" ]
https://github.com/PyTorchLightning/pytorch-lightning/blob/master/pl_examples/domain_templates/imagenet.py#L160 Is there a specific reason for that?
validation_epoch_end only gets the outputs from one process
[ "bug" ]
Hi, I need the whole validation set to compute the validation result. Currently validation_epoch_end only gets the outputs from the current process. Can I gather the outputs from the different gpus and then run validation_epoch_end? Also, I don't necessarily need it to run on all processes; I only need it to run once. ...
Questions about DDP and slurm
[ "won't fix" ]
Hi, When running on slurm with, say, 8 GPUs on a single node with ddp as the backend, each process occupies all GPUs. It only uses a small portion (less than 1 GB) of the non-root GPUs, but multiplying that by 7 is still a large chunk of memory unusable for training. I followed the multi-GPU and slurm documents. Maybe I am missing...
element 0 of tensors does not have a grad_fn
[ "question" ]
❓ Questions and Help What is your question? Hi, I got RuntimeError element 0 of tensors does not have a grad_fn in TEST phase Code def test_step(self, batch, batch_nb): imgs, labels = batch self.last_imgs = imgs # get gradients self.classifier.eval() copied_input = imgs.clone() copied_input.req...