Running Average of my accuracy, losses etc.
[ "question" ]
What is your question? I want my tqdm logger to show me a history of my training on the terminal. Right now, when an epoch ends, all of its data is scrubbed from the command line and the new epoch's data is shown. I also want to see the running accuracy of my network and a running average of my loss on the tqdm bar. How s...
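One way to get this (a minimal sketch, assuming the 0.5.x-era dict-return API, where keys returned under 'progress_bar' are appended to the tqdm postfix) is to accumulate the average yourself and return it from training_step:

```python
import torch
import torch.nn.functional as F
import pytorch_lightning as pl

class RunningAvgModule(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(28 * 28, 10)
        self.loss_sum, self.n_steps = 0.0, 0

    def forward(self, x):
        return self.layer(x.view(x.size(0), -1))

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = F.cross_entropy(self(x), y)
        # accumulate a running mean of the loss across the epoch
        self.loss_sum += loss.item()
        self.n_steps += 1
        running_avg = self.loss_sum / self.n_steps
        # keys under 'progress_bar' are shown in the tqdm postfix
        return {'loss': loss, 'progress_bar': {'avg_loss': running_avg}}
```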
dataloader NotImplementedError
[]
I want to use a modified GAN structure based on the sample here, with random samples of shape (3200, 5). When run, a NotImplementedError occurs. I don't see where the problem is. Thanks for the help. from argparse import ArgumentParser from collections import OrderedDict import torch import torch.nn.functional as F import pytorch_lig...
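For what it's worth, this NotImplementedError usually means a required hook was not overridden; with generated data, the training dataloader is the usual suspect. A hedged sketch (hook name per the 0.5.x-era API; some versions also expect a @pl.data_loader decorator on this hook):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# inside the LightningModule
def train_dataloader(self):
    # random samples of shape (3200, 5), as in the report
    data = TensorDataset(torch.randn(3200, 5))
    return DataLoader(data, batch_size=32)
```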
Early stopping conditioned on metric `val_loss` isn't recognised when setting the val_check_interval
[ "bug" ]
Describe the bug Training stops when setting val_check_interval < 1.0 in the Trainer class, as it doesn't recognise val_loss. I get the following warning at the end of the 3rd epoch: Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss, train_loss To Reproduce Steps to reprod...
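A workaround sketch while the bug stands, assuming the callback looks its monitored key up in whatever validation_end returns: make sure a 'val_loss' key is produced on every validation run.

```python
import torch
import torch.nn.functional as F

def validation_step(self, batch, batch_idx):
    x, y = batch
    return {'val_loss': F.cross_entropy(self(x), y)}

def validation_end(self, outputs):
    avg = torch.stack([o['val_loss'] for o in outputs]).mean()
    # expose 'val_loss' so EarlyStopping(monitor='val_loss') can find it
    return {'val_loss': avg, 'progress_bar': {'val_loss': avg}}
```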
IterableDataset breaks 1.1 compatibility
[ "bug" ]
A recently introduced feature unfortunately breaks compatibility with PyTorch 1.1.0. Describe the bug IterableDataset support, introduced in issue 323, requires PyTorch 1.2.0+. To Reproduce In a python environment with PyTorch 1.1.0 do: import pytorch_lightning Expected behavior Compatibility with PyTorch 1.1.0. I'm fili...
Error message for multiple optimizers
[ "feature", "help wanted" ]
When a user uses multiple optimizers and doesn't add optimizer_idx to training_step, the error is super cryptic and not obvious. Let's add this error: "you passed in {len(self.optimizers)} optimizers but didn't add optimizer_idx to the training_step arguments"
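For context, a sketch of the signature the improved message should point users toward (the loss helpers here are hypothetical):

```python
def training_step(self, batch, batch_idx, optimizer_idx):
    if optimizer_idx == 0:
        return {'loss': self.generator_loss(batch)}      # hypothetical helper
    if optimizer_idx == 1:
        return {'loss': self.discriminator_loss(batch)}  # hypothetical helper
```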
Escaping % in add_default_args
[ "bug" ]
Describe the bug In utilities/arg_parse.py, a percentage symbol is not escaped and would cause an error when printing help information. parser.add_argument('--overfit', default=-1, type=float, help='% of dataset to use with this option. float, or -1 for none') To Reproduce Steps to reproduce the...
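The fix is standard argparse behavior: help strings are %-formatted, so a literal percent sign has to be doubled. A minimal sketch:

```python
from argparse import ArgumentParser

parser = ArgumentParser()
# '%%' renders as a literal '%'; a bare '%' raises ValueError when help is printed
parser.add_argument('--overfit', default=-1, type=float,
                    help='%% of dataset to use with this option. float, or -1 for none')
parser.print_help()
```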
Minimalize tests
[ "feature", "help wanted" ]
The tests take very long to run; we should trim them down to minimal running time... See #504 (comment)
Tensorboard Epoch Weird Chart
[ "bug" ]
Describe the bug I am getting these weird graphs in my TensorBoard. It worked fine when I was calling model.cuda() manually, but broke when I shifted to the automated handling with gpus=1 and distributed_backend=None. I have posted the graph below. My Trainer and LightningModule code is as follows: Trainer: """ This f...
Nvidia DALI integration
[ "feature", "help wanted" ]
Is your feature request related to a problem? Please describe. Lightning handles a lot of parallelization and best practices for speed, so image processing and augmentations often become the bottleneck. Describe the solution you'd like Support or even integration for DALI. For reference https://devblogs.nvidia.com/fas...
Add resuming from specific checkpoint
[ "feature", "help wanted" ]
Is your feature request related to a problem? Please describe. In the current version, there is no way to resume training from a specific checkpoint (not the last checkpoint). Sometimes (very often in my case), one needs to experiment with training under different hyperparameters (e.g. dropout rate, augmentation) from a specific ch...
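Until such a flag exists, a common workaround is plain PyTorch: load the chosen checkpoint's weights before calling fit. A sketch (the module class, hparams, and path are hypothetical; the optimizer state is not restored this way):

```python
import torch

model = MyModel(hparams)  # hypothetical LightningModule
ckpt = torch.load('checkpoints/_ckpt_epoch_7.ckpt', map_location='cpu')
# Lightning checkpoints keep the module weights under 'state_dict'
model.load_state_dict(ckpt['state_dict'])
trainer.fit(model)  # trains from these weights with fresh optimizer state
```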
calling trainer.test() between epochs has side effects in 0.5.3.2
[ "bug" ]
New user of lightning. First downloaded on Oct 15, and updated today, Nov 15, to 0.5.3.2. Working on Ubuntu 18.04.3 LTS, PyTorch 1.3, Python 3.6.8m. No virtual environment. I call trainer.test() in on_epoch_end() at intervals during training - this speeds up comparisons to other model architectures. This worked perfectly in ...
Is the requirement numpy==1.16.4 really needed?
[]
Hi, while installing lightning via pip I saw that the numpy requirement was pinned to version 1.16.4: ERROR: pytorch-lightning 0.5.3.2 has requirement numpy==1.16.4, but you'll have numpy 1.17.4 which is incompatible. After a quick scroll through the source code I'm wondering: is there a reason why this requirement ...
when validation_step/end not defined, val_loss still gets logged
[ "bug" ]
Describe the bug If the validation_step/end is not implemented by user, lightning still shows a mysterious validation loss in tqdm. Where does this come from? To Reproduce Steps to reproduce the behavior: Take MNIST example in "basic_examples" folder (current master branch) Uncomment validation_step and validation_end...
gan.py template fails to run with GPU
[ "bug" ]
Common bugs: Tensorboard not showing in Jupyter-notebook, see issue 79. PyTorch 1.1.0 vs 1.2.0 support, see FAQ. Describe the bug When running the gan.py script with the only change being trainer = pl.Trainer(max_nb_epochs=10, gpus=1, distributed_backend='dp'), or with gpus=[0], the script fails with an error in the loss fun...
Checkpoint gives error
[ "bug" ]
Hi, I wonder if we can save only the best model with the lowest validation error, and not save the other checkpoints. I took a look at checkpoint_callback's save_best_only (below); it seems that this saves the best at every epoch (because the file name changes at every epoch). So I wonder if we can only save the b...
Checkpoint saving period=10 off-by-one error
[ "bug" ]
Describe the bug Checkpoint saving period=10 sometimes is off by one: ./checkpoints/_ckpt_epoch_10.ckpt ./checkpoints/_ckpt_epoch_20.ckpt ./checkpoints/_ckpt_epoch_30.ckpt ./checkpoints/_ckpt_epoch_40.ckpt ./checkpoints/_ckpt_epoch_50.ckpt ./checkpoints/_ckpt_epoch_60.ckpt ./checkpoints/_ckpt_epoch_70.ckpt ./checkpoint...
ValueError: bad value(s) in fds_to_keep, when attemping DDP
[ "bug" ]
I can't get DDP working without getting the following error: Traceback (most recent call last): File "train.py", line 86, in <module> main(config) File "train.py", line 41, in main trainer.fit(model) File "/home/ubuntu/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/pytorch_lightning/trainer/traine...
Refactoring?
[ "feature", "help wanted" ]
Thanks for lightning, it's a nice tool. Trainer already has around 30 arguments, and open PRs want to make it grow (see #516 and #539 for example). The more features, the more arguments, and that's not really scalable IMHO. There are things which could be grouped into objects instead; are you considering refactoring Train...
object has no attribute 'add_scalar'
[ "question" ]
I'm trying to use the default logger to record scalars for tensorboard using add_scalar but I get: self.logger.add_scalar('loss/train_loss', 42, 42) AttributeError: 'TestTubeLogger' object has no attribute 'add_scalar' The docs say TestTubeLogger inherits from SummaryWriter so add_scalar should be ok. Can anyone help? ...
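If the logger wraps the writer rather than subclassing it, the underlying writer is typically exposed as .experiment; a hedged one-liner:

```python
# the SummaryWriter-like object lives on .experiment, not on the logger itself
self.logger.experiment.add_scalar('loss/train_loss', 42, 42)
```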
0.5.3 broke DDP vs 0.5.2.1
[ "bug" ]
Using ddp on 0.5.3.2 would cause a gpu worker to crash at an epoch transition (start of epoch 7 for me) and hang the whole training process. I rolled back to 0.5.2.1 (used for my last project) and the issue was gone. Single gpu training works fine on both versions and toggling amp makes no difference. I'm using pytorch...
Full callback handling
[]
Is your feature request related to a problem? Please describe. I started deep learning using fastAI, and despite all its drawbacks there is one thing I found very handy when I wanted to tweak the training loop: callbacks. This is far less important here, as we have control over training_step, but I was for instance won...
The gan template does not seem to set properly the gradients of the discriminator to zero
[ "question" ]
At the beginning of the discriminator training in each iteration, the gradient is not set to zero. To investigate, just print the gradients of the discriminator after the line if optimizer_i == 1:. The optimizer.zero_grad() for the discriminator is then only called once all the gradients are accumulated, including the ones...
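A hedged workaround sketch: since backward for the returned loss runs after training_step, dropping the stale gradients at the start of the discriminator's turn is safe; alternatively, detach the generator output in the discriminator step so those gradients never accumulate.

```python
def training_step(self, batch, batch_idx, optimizer_idx):
    if optimizer_idx == 1:
        # discard gradients accumulated during the generator update
        self.discriminator.zero_grad()
        return {'loss': self.discriminator_loss(batch)}  # hypothetical helper
```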
Update docs to be clear on --gpus behaviour.
[ "feature", "help wanted" ]
Final resolution: The resolution then should be alternative 1, since we agree that don't want to get rid of the 'number of gpus' functionality (which was the original proposed aggressive solution). If we detect --gpus 0 with int, a warning should suffice alongside updated docs. Is your feature request related to a pro...
Using print_nan_grads in the Trainer results in an error
[ "bug" ]
Describe the bug When using print_nan_grads=True in the Trainer, I am getting the error below. trainer.fit(lstm_model) File "/Users/anaconda3/envs/snorkel/lib/python3.6/site-packages/pytorch_lightning/trainer/trainer.py", line 364, in fit self.run_pretrain_routine(model) File "/Users/anaconda3/envs/snorkel/lib/python3...
failing tests for Windows (Appveyor)
[ "bug" ]
Common bugs There are several issues related to the Windows path structure. To Reproduce Steps to reproduce the behavior: see our CI results - https://ci.appveyor.com/project/Borda/pytorch-lightning/builds/29252138/job/ip4j5poawhphfd7g Expected behavior Fix the bugs and enable Appveyor CI for PR checks
Support custom weighted loss in gradient accumulation
[ "feature", "help wanted", "won't fix" ]
Is your feature request related to a problem? Please describe. When using triplet loss, there can be some "easy triplets" contributing 0.0 to the batch loss. Currently, pytorch-lightning assumes all samples in a mini-batch contribute a non-zero value to the closure_loss. This can be an issue for online triplet min...
GAN training with PyTorch Lightning is broken.
[ "bug" ]
I was trying to train a DCGAN on my dataset, but it wouldn't work by any means until I detached the training logic from Lightning and ran the code without it. It did not work when my training logic was in the Lightning module. I checked the GAN examples in the docs and also the multiple-optimizer handling. After 2 days of headach...
GAN example: Only one backward() call?
[ "feature", "question", "let's do it!" ]
In the PyTorch GAN tutorial https://pytorch.org/tutorials/beginner/dcgan_faces_tutorial.html there are two backward() calls for the discriminator. How do you ensure this with your structure, where backward() gets called after the training step? Best, Alain
Custom callbacks
[ "feature", "help wanted" ]
It would be great if the Trainer could support any custom callbacks that follow the Callback structure.
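A sketch of the kind of interface being requested; the hook names and the Trainer argument are hypothetical, mirroring the Keras-style callback structure:

```python
class PrintingCallback:
    """Hypothetical callback following the proposed structure."""

    def on_epoch_begin(self, trainer, model):
        print(f'starting epoch {trainer.current_epoch}')

    def on_epoch_end(self, trainer, model):
        print('epoch finished')

# desired usage (hypothetical argument):
# trainer = Trainer(callbacks=[PrintingCallback()])
```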
Why are hparams mandatory in the LightningModule definition?
[ "question" ]
When I don't pass hparams to the LightningModule, it doesn't allow me to load a previously saved model from a checkpoint. In particular, hparams can't be parsed from the command line in Jupyter Notebook/Lab, so how does one use it in such a use case (for testing, etc.)? I am able to train a model and checkpoint it, but I can't load it when I restart ...
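One way to sidestep argparse in a notebook, since the module only needs an attribute container it can serialize; a sketch with a hypothetical module:

```python
from argparse import Namespace

# works in Jupyter: no command-line parsing involved
hparams = Namespace(learning_rate=1e-3, hidden_dim=128)
model = MyModel(hparams)  # hypothetical LightningModule taking hparams
# checkpoints can later rebuild the model from the stored hparams
```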
Early Stopping kicks in at min_epochs + 2 instead of min_epochs
[ "bug" ]
Describe the bug I was working on a fix for #524 and found that early stopping starts to kick in at epoch 3 despite min_epochs = 1. To Reproduce run basic_examples/gpu_template.py and log the callback calls every epoch. Expected behavior When setting min_epochs=n (counting from 1), we should evaluate early stopping at ...
empty_cache calls in training occupy memory on gpu #0
[ "bug" ]
Training on a GPU other than gpu #0 allocates a ~500 MB chunk of memory on gpu #0; the memory is totally unused and should not be allocated at all. Debugging shows that the initial allocation happens at this line: https://github.com/williamFalcon/pytorch-lightning/blob/2f01c03b38fc16618aa9839d39e0ae5a142c0559/pytorch_lightnin...
Comet PAPI Deprecated
[ "bug" ]
Use of the Comet API logger reports an unnecessary deprecation warning relating to the use of comet_ml.papi, rather than the newer comet_ml.api. Example: COMET WARNING: You have imported comet_ml.papi; this interface is deprecated. Please use comet_ml.api instead. For more information, see: https://www.comet.ml/docs/py...
How to share y_hat on a batch with multi optimizers?
[ "question", "won't fix" ]
I tried to implement EdgeConnect with pytorch-lightning, and my implementation is this. But this code is very slow in training because the very heavy generator is called twice. I want to share y_hat (the outputs) within a batch, but I have no idea how. A fast but forced training implementation is here (this is not elegant ...
Typo in README
[]
In the section https://github.com/williamFalcon/pytorch-lightning#what-does-lightning-control-for-me there is a large jpg. In the section "DATA SETUP" it says "Augmets" instead of "Augments". If that image was generated from LaTeX, or if it was an svg, anybody in the community could edit it.
Cyclic learning rate finder as a part of Trainer
[ "feature", "help wanted" ]
🚀 Feature Learning rate finder to plot the lr vs. loss relationship for Trainer and find a good starting learning rate. Motivation Cyclical Learning Rates for Training Neural Networks by Leslie N. Smith documents how to find a good learning rate for training with the CyclicLR scheduler. Pitch Adding a method to the Trainer cl...
num_training_batches rounds down, causing 0 batches count
[ "bug" ]
πŸ› Bug self.num_training_batches is defined using int here, which rounds it down to 0 when a small training_percent_check or overfit_pct is used, even though at least 1 batch is still processed. This does not cause any errors in "vanilla" lightning, but crashes any user code that uses the number of batches in a divisio...
Trainer's .fit() mimics .test() after first call to .test() + .test() doesn't print metrics
[ "bug", "help wanted" ]
πŸ› Bug After first call to Trainer.test() all subsequent calls to Trainer.fit() exhibit output behavior of Trainer.test() Trainer.test() doesn't print metrics (and returns None) returned by LightningModule.test_end() To Reproduce Run following code in a Python 3.6.8 env with torch=1.3.1 and pytorch_lightning=0.5.3.2 ...
Semantic Segmentation example
[ "feature" ]
🚀 Feature Semantic segmentation example code with data loading and training implemented. Motivation There are not many examples available for PyTorch Lightning (PL) as of now. A reproducible example illustrating semantic segmentation will be helpful for users to understand how everything works. I had to look around a...
tensorflow version
[ "question" ]
Hi, I am getting the following error and I was wondering what tensorflow version is currently supported since I am using 1.11.0. module 'tensorflow.io' has no attribute 'gfile' Thanks!
Step-wise processing, better support for `IterableDataset`, and others
[ "feature", "help wanted" ]
I have been using PTL for a month. It is nice and saves a lot of time, and I intend to use it in future projects. That said, I have a list of feature requests and improvements that would be very helpful to have to support a wider set of use cases. I am not sure what the best format for this list is, so I will just write t...
powersgd
[ "feature", "help wanted", "discussion", "waiting on author" ]
The PowerSGD paper shows promising scaling for distributed training. https://arxiv.org/abs/1905.13727 I'm interested in porting this to pytorch-lightning, do you think it's a good idea? Thanks
add "no logging" option
[ "feature", "help wanted" ]
I may be wrong, but I see no way to entirely avoid logging during training, which sometimes may be convenient for quick exploratory experiments. I suggest to have trainer = Trainer(logger=None) construct a trainer that does no logging at all
Turn off validation if val_percent_check=0
[ "bug" ]
As was suggested by @williamFalcon in #536 (comment), val_percent_check=0 should turn off the validation loop. But right now it does not work because of self.num_val_batches = max(1, self.num_val_batches). So I suggest fixing it. Moreover, I suggest more thorough processing of train_percent_check and val_check_interva...
What is hparams exactly?
[ "question" ]
Hi, thanks for the nice product again. From #525 and #599, I could guess that hparams is required to load a saved model (which I think should be mentioned somewhere in the docs, btw). And from the examples, it seems that hparams may be an argparse.Namespace. Unfortunately though, it was not so easy to understand the concept. W...
How to save checkpoint when turning off the validation?
[ "feature", "help wanted" ]
Help In some cases, like fine-tuning BERT, we don't need the validation step but still have to save model checkpoints. But I can't get this to work. If anyone knows how, please tell me. Thank you!
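One hedged workaround is to point the checkpoint callback at a metric the training loop produces instead of val_loss (argument names as in the 0.5.x-era ModelCheckpoint):

```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import ModelCheckpoint

# monitor a training metric so no validation loop is required
checkpoint_cb = ModelCheckpoint(filepath='checkpoints/', monitor='loss',
                                save_best_only=False)
trainer = Trainer(checkpoint_callback=checkpoint_cb)
```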
ValueError: bad value(s) in fds_to_keep when using DDP
[ "bug", "help wanted" ]
πŸ› Bug I see the following error when I try to use ddp for distributed training. I see #538 is the similar issue, but I couldn't figure out how to apply the solution to my problem. The following is the error log. 2019-12-27 22:17:31,780 - __main__ - INFO - Loaded dictionary 2019-12-27 22:17:31,816 - model.dictionary - ...
Extract dataset definition out of the LightningModule
[ "feature", "help wanted" ]
🚀 Feature Extract the dataset definition out of the LightningModule Motivation Separation of data from the model. Pitch The dataset loaders could easily be passed to the fit method directly instead of having to define them inside the LightningModule; this avoids having a single class that possibly contains: data, data pi...
`max_nb_epochs` not effective in master branch
[ "bug" ]
Hi, I think this might be an easy one to fix. I'm using the bleeding edge version from master with pytorch 1.3. Trainer(max_nb_epochs=...) does not limit the max epochs in training at the moment. See doc I think this is due to the following code and default setting: https://github.com/williamFalcon/pytorch-lightning/bl...
How to log train and validation loss in the same figure ?
[ "question" ]
❓ Questions and Help What is your question? How can we log train and validation loss in the same plot and preview them in TensorBoard? Having both in the same plot is useful to identify overfitting visually. Code def training_step(self, batch, batch_idx): images, labels = batch output = self.forward...
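One approach, assuming a TensorBoard-backed logger whose .experiment exposes a SummaryWriter: add_scalars groups several scalars under one main tag, which TensorBoard renders as a single chart.

```python
# inside a hook; train_loss and val_loss are the already-computed
# scalars for the current step (names illustrative)
self.logger.experiment.add_scalars(
    'loss',
    {'train': train_loss, 'val': val_loss},
    self.global_step,  # or self.trainer.global_step, depending on version
)
```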
Removing particular defaults from progress bar
[ "question" ]
Related to issue #629, which proposes to remove some default entries from the progress bar. Is there an existing way to remove entries from the tqdm_dict once the trainer is initialized?
Correctly using `ReduceLROnPlateau`
[ "question" ]
Hello all, I'm trying to use the learning rate scheduler ReduceLROnPlateau, though I'm not sure I'm implementing this correctly. The scheduler doesn't seem to be working properly. I am essentially using the same code as the Colab MNIST tutorial (I ran this in colab) import os import torch from torch.nn import function...
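For reference, a configure_optimizers sketch using the documented two-list return form; whether the trainer passes the monitored value into scheduler.step() varies by version, so it is worth verifying against the installed release:

```python
import torch

def configure_optimizers(self):
    optimizer = torch.optim.Adam(self.parameters(), lr=0.02)
    # reduce the LR when the monitored quantity stops improving
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, mode='min', factor=0.1, patience=3)
    return [optimizer], [scheduler]
```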
Requires_grad automatically set to false during training
[]
Hi, first of all I'd like to thank you for the fine package; it removes a lot of overhead. I'm currently working on a project using the pytorch lightning module. However, during the training procedure the flag 'self.requires_grad' is set to false even though the 'self.unfreeze()' method has been used before in the forw...
Mismatch of displayed 'epoch'
[ "bug", "good first issue" ]
πŸ› Bug The display of epoch's number mismatches between the progress bar and the checkpoint indicator. I wonder this mismatch could confuse users. progress bar: The number of epochs starts from 1. checkpoint indicator: The number of epochs starts from 0. metrics.csv also starts from 0. I think that to change checkpoi...
Multi-GPU on AWS p2.8xlarge instance (ddp2 and ddp)
[ "bug" ]
As information, AWS p2.8xlarge has 8 K80s all on the same node. I have tried my model gpus=1 and distributed_backend=None on an AWS p2.xlarge instance (1 K80) and it works. When I try gpus=8 and distributed_backend='ddp2' on an AWS p2.8xlarge, I get the following error: File "/usr/local/lib/python3.6/dist-packages/py...
Multi-GPU (dp) on AWS p2.8xlarge instance
[ "bug" ]
I don't think the AWS instance is the problem, since the model dies on the first forward pass. Here is the error: 16:17:51 Traceback (most recent call last): 16:17:51 File "/Siamese_BERT_blogpost/train.py", line 107, in <module> 16:17:51 trainer.fit(model) 16:17:51 File "/usr/local/lib/python3.6/dist-packages/p...
Checkpoint saving isn't atomic
[ "bug" ]
πŸ› Bug Saving checkpoints happens non-atomically. In some cases, this causes an incomplete write of a checkpoint (for example when receiving a SIGKILL during writing), causing any subsequent loading to fail with RuntimeError: unexpected EOF, expected 8 more bytes. The file might be corrupted. To Reproduce This is diffi...
How to make test_end() return metrics
[ "question" ]
I have searched through the docs / Google as well as looked through the source code. It seems like test_end() returns nothing (it has no return in the function). I was wondering if I was missing something really obvious. I would simply like to return the metrics from test_end.
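Since test_end is a hook meant to be overridden, returning aggregated metrics from your own implementation is the intent; a sketch (key names illustrative):

```python
import torch
import torch.nn.functional as F

def test_step(self, batch, batch_idx):
    x, y = batch
    return {'test_loss': F.cross_entropy(self(x), y)}

def test_end(self, outputs):
    avg = torch.stack([o['test_loss'] for o in outputs]).mean()
    # returning the dict surfaces the metrics to loggers / the progress bar
    return {'test_loss': avg, 'log': {'test_loss': avg}}
```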
drop Pandas dependency
[ "feature", "help wanted", "good first issue" ]
🚀 Feature Replace the few Pandas usages with the native csv package. Motivation #687 (comment)
can i run multiple ddp jobs on single node
[ "bug", "feature" ]
I am running on a 14 core, 7 gpu machine. Ubuntu 18.04.2 LTS, Python 3.6.8, lightning 0.5.3.2, no virtual environment, no SLURM. I have moved a tried and true model to ddp. It works great in all scenarios, including ddp as a single invocation. I cannot successfully start a second one, unfortunately. I get the following f...
Documentation has disappeared
[]
The documentation on github.io has disappeared for a few days. Has it moved to some other place? I really need it to continue my work. Thanks a lot.
Fitting with log_gpu_memory=True fails in python3.6.
[ "bug" ]
Bug Fitting with log_gpu_memory=True in the Trainer fails on Python 3.6. To Reproduce Use Python 3.6. Create any trainer with the log_gpu_memory=True option, then fit it. See error: /a/pytorch-lightning/pytorch_lightning/core/memory.py in get_gpu_memory_map() 237 encoding='utf-8', 238 ...
LR Schedulers shouldn't get `epoch` argument in `step` function
[ "bug", "good first issue" ]
πŸ› Bug PyTorch LR schedulers now shouldn't get any arguments in step function, see here and here. Looks like the calls in PytorchLightning are not in line with the new interface, see here. This results in unexpected LR changes. Removing the epoch argument from step call solves the issue for me. Environment PyTorch 1.4 ...
logging basic configuration level: INFO vs. WARNING (usability with W&B)
[ "good first issue", "logger" ]
Thanks for the amazing package! I am having a great time using it. Issue Recently, I have been playing around with the weights and biases (W&B) logging functionality and I noticed that I was getting a lot of logging messages in my jupyter notebook while training my model (every epoch I got new messages). When I looked ...
W&B: Allow for passing experiment into the WandbLogger (and logging semantics)
[ "logger" ]
Currently, the WandbLogger will automatically create a new internal experiment (run) whenever you create a new WandbLogger. Issue If I instantiate a wandb experiment outside of the logger, then I will have two experiments when I train my model since there is no way to set the internal experiment of the WandbLogger to m...
Trainer is setting parameters with requires_grad=False to requires_grad=True (bug)
[ "bug" ]
πŸ› Bug When training a model that has some parameters where requires_grad=False, the Trainer is actually setting requires_grad=True for these parameters and changing them. The bug appears to originate in the TrainerTrainLoopMixin code. To Reproduce Steps to reproduce the behavior: Create a model with some parameters...
convert examples to doctests
[ "feature", "help wanted", "good first issue", "docs" ]
🚀 Feature Converting examples to doctests... Motivation The examples are static now, so there is no guarantee that they are still valid... Advantages of converting to doctests would be: increased reproducibility; each example can run stand-alone; testing on smaller units; simplified debugging. Addit...
Tqdm progress bar error
[ "bug", "duplicate", "help wanted" ]
When running one epoch with train and val dataloaders, as soon as validation starts the progress bar will create a new line for each iteration. I have this bug in PyCharm as well as in Kaggle kernels. Below is a typical example: 80% runs smoothly, then as soon as validation starts, a new line is started for each tqdm iteration ...
Modify hook on_batch_start() API to support other iterable dataloaders
[ "feature", "discussion" ]
Motivation By default LightningModule.train_dataloader() etc. return a PyTorch DataLoader, but this can easily be extended to other iterable objects by converting to a vanilla batch in on_batch_start(). Pitch def on_batch_start(self, batch): # do something # before return response # after return batc...
Better way to set retain_graph
[ "question" ]
Is there a better way to set retain_graph, especially when using two optimizers? I have read issue #356 and the corresponding fix, which sets it by overriding the backward function. However, this becomes messy, especially when more than one optimizer is used, as the function doesn't have the optimizer_idx as an argumen...
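For reference, the #356-style fix overrides the module's backward hook; a sketch assuming that era's hook signature (which, as noted, receives no optimizer_idx) and NVIDIA apex for the amp branch:

```python
def backward(self, use_amp, loss, optimizer):
    # keep the graph alive so a second optimizer can backprop through shared parts
    if use_amp:
        from apex import amp  # assumes NVIDIA apex is installed
        with amp.scale_loss(loss, optimizer) as scaled_loss:
            scaled_loss.backward(retain_graph=True)
    else:
        loss.backward(retain_graph=True)
```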
ModelCheckpoint Filepath Doesn't Use Logger Save Dir
[ "bug", "help wanted", "good first issue" ]
πŸ› Bug Not sure if this is intended, but the model checkpoint isn't using the same directory as the logger, even if the logger exists. I would have expected this line here to be self.logger.save_dir instead of self.default_save_path. Thank you, -Collin
Doc broken / Link broken
[ "good first issue" ]
The readme link "Lightning module" is broken: https://pytorch-lightning.readthedocs.io/en/latest/LightningModule/RequiredTrainerInterface/ The source link in documentation goes nowhere: https://pytorch-lightning.readthedocs.io/en/latest/logging.html click on source goes to https://github.com/PyTorchLightning/PyTorch-Li...
Incompatible torch and torchvision version numbers in requirements file
[ "bug" ]
πŸ› Bug requirements.txt lists torch and torchvision requirements as follows torch>=1.1 torchvision>=0.4.0 which leads to pip installing torchvision 0.4.2 and torch 1.4.0. These are incompatible with each other. Running pip -r requirements.txt throws the following error ERROR: torchvision 0.4.2 has requirement torch==...
TensorBoardLogger creates another tfevents file.
[ "bug", "help wanted" ]
πŸ› Bug TensorBoardLogger creates another tfevents file when fit() is running. It seems that no metrics are logged in the redundant file, but it will be shown in TensorBoard as a run. I don't do anything about loggers in my LightningModules. Expected file structure: | |- save_dir | |- name | |- version_0 | ...
How to save model weights to mlflow tracking server while using MLFLogger to save metrics.
[ "question" ]
❓ Questions and Help Does anybody know a simple way to accomplish this? Before asking: search the issues. search the docs. What is your question? I'm searching for a way to save model weights to the mlflow tracking server while using MLFLogger to save metrics. My problem is, I cannot find a way to save model weights to the same run ...
trainer.test() fails when using ddp
[ "bug" ]
Calling trainer.test() following ddp training fails, giving this error: File "/home/seth/.local/lib/python3.6/site-packages/pytorch_lightning/trainer/model_hooks.py", line 19, in is_overriden is_overriden = getattr(model, f_name).__code__ is not getattr(super_object, f_name).__code__ AttributeError: 'NoneType' ob...
Fit Error: raised training_step() takes 3 positional arguments but 4 were given when I use truncated_bptt_steps
[ "bug" ]
πŸ› Bug Everything works fine when I had truncated_bptt_steps as None, but when I set it to 5. The error mentioned in the title is thrown (see below for Traceback detail): Traceback (most recent call last): File "Trainer.py", line 103, in <module> trainer.fit(gen) File "xxxx/anaconda3/lib/python3.7/site-packages...
Allow a flag into trainer to save checkpoints at partial epochs
[ "feature", "help wanted" ]
🚀 Feature Allow a flag in the Trainer to save checkpoints at partial epochs Motivation When you have a large dataset that takes tens of hours per epoch, it's important to have checkpoints along the way. Right now we only get a checkpoint on_epoch_end. Workaround Also interested to see if there is a good workaround. I gu...
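One hedged workaround: save manually from a batch-level hook every N steps. The hook name follows that era's ModelHooks; the interval and path are illustrative, and only the weights (not trainer or optimizer state) are saved.

```python
import os
import torch

# inside the LightningModule
def on_batch_end(self):
    step = self.trainer.global_step
    if step > 0 and step % 10000 == 0:
        os.makedirs('checkpoints', exist_ok=True)
        torch.save(self.state_dict(), f'checkpoints/step_{step}.pt')
```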
Why isn't EarlyStopping called after every validation_end unlike ModelCheckpoint?
[ "question" ]
❓ Questions and Help Hi, I was wondering why the EarlyStopping callback is not called after every validation_end, unlike the ModelCheckpoint callback? I use val_check_interval < 1 and my model overfits quite fast on the train data, so it would be handy to stop even during an epoch. What have you tried? I tried to replac...
Test metrics not logging to Comet after training
[ "bug" ]
πŸ› Bug When testing a model with Trainer.test metrics are not logged to Comet if the model was previously trained using Trainer.fit. While training metrics are logged correctly. Code sample comet_logger = CometLogger() trainer = Trainer(logger=comet_logger) model = get_model() trainer.fit(model) # Metr...
run_evaluation() does not work.
[ "bug", "good first issue" ]
πŸ› Bug run_evaluation() does not work. Suspected that model is not loaded into the trainer at any point. To Reproduce Steps to reproduce the behavior: Run ImageNet example with --evaluate argument. python imagenet_example.py --evaluate Expected behavior Model is supposed to load from the checkpoint directory and eval...
wandb logging does not work - log method called on the wrong object (?)
[ "bug", "good first issue" ]
πŸ› Bug When using the WandbLogger and not providing an experiment, I get an AttributeError: 'Run' object has no attribute 'log' in line 84 of WandbLogger. Instead on experiment, I think log should be called on wandb Code sample wandb_logger = WandbLogger(name="name", save_dir="/path/to/folder", offline=False, project=...
Improve tqdm progress bar
[ "feature", "help wanted", "good first issue" ]
At the moment the progress bar is initialized with the arg leave=False (pytorch_lightning/trainer/trainer.py, line 861 at deffbab): eval_results = self.evaluate(model, self.get_val_dataloaders(), ...
logging module collision
[ "bug" ]
The logging module collides with the Python one: import pytorch_lightning as pl; dir(pl.logging) gives you the Python logging module attributes instead of the pytorch_lightning ones. This is probably due to this line (pytorch_lightning/__init__.py, line 31 at deffb...
Lightning DDP seems to be breaking autograd
[ "bug" ]
πŸ› Bug I am attempting to make a lightning script for the example code in https://github.com/lucidrains/reformer-pytorch I now have 2 scripts, 1 is a lightning implementation and one is a regular apex DDP implementation, The regular apex DDP implementation is able to train completly fine, but lightning throws an erro...
Model Parallel
[ "question" ]
Hi - I'm interested in starting to use your project, and I'm wondering if you support model parallelism, so that I can train models that do not fit on a single card? If this is already supported, could you please point me to an example? Or do you have any ideas on how to set this up manually, if it is not explicitly supported...
Checkpoint naming broken
[ "bug", "help wanted" ]
πŸ› Bug I would like to be able to save checkpoints with custom names that include the value of my val_loss, ie. path/epoch_2-val_loss_0.2.hdf5 . The documentation for ModelCheckpoint suggests that this is possible using the filepath argument. This does not appear to be the case, since the source code calls os.mkdirs(fi...
Expand badges for tests
[ "feature", "help wanted" ]
We already do a lot of tests. Let's have a badge for each thing we test. We can use a 2D matrix: on the left, the PyTorch versions; on top, the Python versions?
TypeError: validation_step() takes 3 positional arguments but 4 were given
[ "bug" ]
Hi, I just started using lightning and I've been running into this bug lately. Here's my code for validation_step and validation_end:

def validation_step(self, batch, batch_idx):
    imgs = batch
    (z1, z2) = torch.split(imgs['X'], 1, 1)
    ct = imgs['CT']
    y_hat = self.discriminator(self(z1...
Trainer got an unexpected keyword argument 'save_best_only'
[]
When I tried to run the following code:

import os
import argparse
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision.datasets import MNIST
import torchvision.transforms as transforms
import pytorch_lightning as pl
from pytorch_lightning import Trainer

def parse_args():
    ...
Ability to specify step for logged metrics
[ "feature", "help wanted" ]
🚀 Feature Add an option to specify the step for logged metrics Motivation After calculating some metric on the n-th epoch, I'd like to put the corresponding mark (n) on the x-axis. As I see in the source code, there's no obvious way to do this. Instead something like n * num_batches will be shown, like in the figure where the loss is cal...
Best way to use mixup in lightning?
[ "question" ]
Just wondering what the best way to implement mixup in lightning is, possibly in the dataset?
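Since Lightning leaves training_step entirely to the user, mixup fits there naturally; a sketch of the standard formulation (the beta parameter is illustrative):

```python
import numpy as np
import torch
import torch.nn.functional as F

def training_step(self, batch, batch_idx):
    x, y = batch
    lam = np.random.beta(0.4, 0.4)  # mixing coefficient
    perm = torch.randperm(x.size(0), device=x.device)
    x_mix = lam * x + (1 - lam) * x[perm]  # blend pairs of inputs
    logits = self(x_mix)
    # blend the two losses with the same coefficient
    loss = lam * F.cross_entropy(logits, y) \
        + (1 - lam) * F.cross_entropy(logits, y[perm])
    return {'loss': loss}
```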
Release Pytorch Lightning as a conda package
[ "feature", "help wanted" ]
🚀 Feature Please make the pytorch-lightning package available from the conda package manager. This would probably be done through conda-forge: conda install pytorch-lightning -c conda-forge Motivation The default way of installing PyTorch is via their conda channel, so (probably) most users of Lightning already use ...
new profiler has failing tests
[ "bug" ]
@jeremyjordan Tests fail on OSX tests/test_profiler.py::test_advanced_profiler FAILED
Customizable TensorBoard Graphics at Test Run
[ "feature", "help wanted" ]
🚀 Feature Customizable TensorBoard graphics may be helpful, especially for the test run. Motivation One may need to add customized graphics like pictures or grads to TensorBoard. Especially for the test run, adding one scalar is just not enough. Exposing the TensorBoard writer may be helpful. (Or even better, ready-to-use f...
Enable stepwise processing flag for schedulers
[ "feature", "help wanted" ]
🚀 Feature Asking if it makes sense to add a flag in the Trainer class for calling scheduler.step() after every update (per #640). Motivation This makes sense for training NLP models such as BERT/XLNet or any others that update the lr based on the current step (with training defined in terms of steps instead of epochs) i...
Using other libraries with pytorch-lightning.
[ "question" ]
I'm just wondering, would it be possible to use the AllenNLP/Texar-PyTorch models and data processing submodules as part of the PyTorch-Lightning Trainer? Do you think the class structure and GPU training setup of PyTorch Lightning would be adaptable to AllenNLP modules? I saw that torchtext data handlin...
EarlyStopping when using an IterableDataset
[ "feature", "help wanted", "good first issue" ]
I am trying to use an IterableDataset while also including early stopping. Using version 0.6.1, it looks like the early stopping callback is only checked after an epoch. When using an IterableDataset, I don't see how this is ever called. I have implemented a quick solution on my local machine after noticing a metric ch...
Add deepspeed support
[ "feature", "help wanted" ]
Let's support this! https://github.com/microsoft/DeepSpeed
Enable anchor links in docs
[ "docs" ]
Our docs don't have these links. We need to enable them.