| title | body | html_url | comments | pull_request | number | is_pull_request |
|---|---|---|---|---|---|---|
SacreBLEU update | With the latest release of [sacrebleu](https://github.com/mjpost/sacrebleu), `datasets.metrics.sacrebleu` is broken and raises an error:
AttributeError: module 'sacrebleu' has no attribute 'DEFAULT_TOKENIZER'
this happens since in new version of sacrebleu there is no `DEFAULT_TOKENIZER`, but sacrebleu.py tries... | https://github.com/huggingface/datasets/issues/2737 | [
"Hi @devrimcavusoglu, \r\nI tried your code with latest version of `datasets`and `sacrebleu==1.5.1` and it's running fine after changing one small thing:\r\n```\r\nsacrebleu = datasets.load_metric('sacrebleu')\r\npredictions = [\"It is a guide to action which ensures that the military always obeys the commands of t... | null | 2,737 | false |
Add Microsoft Building Footprints dataset | ## Adding a Dataset
- **Name:** Microsoft Building Footprints
- **Description:** With the goal of increasing the coverage of building footprint data available as open data for OpenStreetMap and humanitarian efforts, we have released millions of building footprints as open data, available to download free of charge.
- *... | https://github.com/huggingface/datasets/issues/2736 | [
"Motivation: this can be a useful dataset for researchers working on climate change adaptation, urban studies, geography, etc. I'll see if I can figure out how to add it!"
] | null | 2,736 | false |
Add Open Buildings dataset | ## Adding a Dataset
- **Name:** Open Buildings
- **Description:** A dataset of building footprints to support social good applications.
Building footprints are useful for a range of important applications, from population estimation, urban planning and humanitarian response, to environmental and climate science.... | https://github.com/huggingface/datasets/issues/2735 | [] | null | 2,735 | false |
Update BibTeX entry | Update BibTeX entry. | https://github.com/huggingface/datasets/pull/2734 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2734",
"html_url": "https://github.com/huggingface/datasets/pull/2734",
"diff_url": "https://github.com/huggingface/datasets/pull/2734.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2734.patch",
"merged_at": "2021-07-30T15:47... | 2,734 | true |
Add missing parquet known extension | This code was failing because the parquet extension wasn't recognized:
```python
from datasets import load_dataset
base_url = "https://storage.googleapis.com/huggingface-nlp/cache/datasets/wikipedia/20200501.en/1.0.0/"
data_files = {"train": base_url + "wikipedia-train.parquet"}
wiki = load_dataset("parquet", da... | https://github.com/huggingface/datasets/pull/2733 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2733",
"html_url": "https://github.com/huggingface/datasets/pull/2733",
"diff_url": "https://github.com/huggingface/datasets/pull/2733.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2733.patch",
"merged_at": "2021-07-30T13:24... | 2,733 | true |
Updated TTC4900 Dataset | - The source address of the TTC4900 dataset of [@savasy](https://github.com/savasy) has been updated for direct download.
- Updated readme. | https://github.com/huggingface/datasets/pull/2732 | [
"@lhoestq, lütfen bu PR'ı gözden geçirebilir misiniz?",
"> Thanks ! This looks all good now :)\r\n\r\nThanks"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2732",
"html_url": "https://github.com/huggingface/datasets/pull/2732",
"diff_url": "https://github.com/huggingface/datasets/pull/2732.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2732.patch",
"merged_at": "2021-07-30T15:58... | 2,732 | true |
Adding to_tf_dataset method | Oh my **god** do not merge this yet, it's just a draft.
I've added a method (via a mixin) to the `arrow_dataset.Dataset` class that automatically converts our Dataset classes to TF Dataset classes ready for training. It hopefully has most of the features we want, including streaming from disk (no need to load the wh... | https://github.com/huggingface/datasets/pull/2731 | [
"This seems to be working reasonably well in testing, and performance is way better. `tf.py_function` has been dropped for an input generator, but I moved as much of the code as possible outside the generator to allow TF to compile it correctly. I also avoid `tf.RaggedTensor` at all costs, and do the shuffle in the... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2731",
"html_url": "https://github.com/huggingface/datasets/pull/2731",
"diff_url": "https://github.com/huggingface/datasets/pull/2731.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2731.patch",
"merged_at": "2021-09-16T13:50... | 2,731 | true |
Update CommonVoice with new release | ## Adding a Dataset
- **Name:** CommonVoice mid-2021 release
- **Description:** more data in CommonVoice: Languages that have increased the most by percentage are Thai (almost 20x growth, from 12 hours to 250 hours), Luganda (almost 9x growth, from 8 to 80), Esperanto (7x growth, from 100 to 840), and Tamil (almost 8... | https://github.com/huggingface/datasets/issues/2730 | [
"cc @patrickvonplaten?",
"Does anybody know if there is a bundled link, which would allow direct data download instead of manual? \r\nSomething similar to: `https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/ab.tar.gz` ? cc @patil-suraj \r\n",
"Also see... | null | 2,730 | false |
Fix IndexError while loading Arabic Billion Words dataset | Catch `IndexError` and ignore that record.
Close #2727. | https://github.com/huggingface/datasets/pull/2729 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2729",
"html_url": "https://github.com/huggingface/datasets/pull/2729",
"diff_url": "https://github.com/huggingface/datasets/pull/2729.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2729.patch",
"merged_at": "2021-07-30T13:03... | 2,729 | true |
Concurrent use of same dataset (already downloaded) | ## Describe the bug
Launching several jobs at the same time that load the same dataset triggers some errors (see the last comments).
## Steps to reproduce the bug
export HF_DATASETS_CACHE=/gpfswork/rech/toto/datasets
for MODEL in "bert-base-uncased" "roberta-base" "distilbert-base-cased"; do # "bert-base-uncased" ... | https://github.com/huggingface/datasets/issues/2728 | [
"Launching simultaneous job relying on the same datasets try some writing issue. I guess it is unexpected since I only need to load some already downloaded file.",
"If i have two jobs that use the same dataset. I got :\r\n\r\n\r\n File \"compute_measures.py\", line 181, in <module>\r\n train_loader, val_loade... | null | 2,728 | false |
Error in loading the Arabic Billion Words Corpus | ## Describe the bug
I get `IndexError: list index out of range` when trying to load the `Techreen` and `Almustaqbal` configs of the dataset.
## Steps to reproduce the bug
```python
load_dataset("arabic_billion_words", "Techreen")
load_dataset("arabic_billion_words", "Almustaqbal")
```
## Expected results
Th... | https://github.com/huggingface/datasets/issues/2727 | [
"I modified the dataset loading script to catch the `IndexError` and inspect the records at which the error is happening, and I found this:\r\nFor the `Techreen` config, the error happens in 36 records when trying to find the `Text` or `Dateline` tags. All these 36 records look something like:\r\n```\r\n<Techreen>\... | null | 2,727 | false |
Typo fix `tokenize_exemple` | There is a small typo in the main README.md | https://github.com/huggingface/datasets/pull/2726 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2726",
"html_url": "https://github.com/huggingface/datasets/pull/2726",
"diff_url": "https://github.com/huggingface/datasets/pull/2726.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2726.patch",
"merged_at": "2021-07-29T12:00... | 2,726 | true |
Pass use_auth_token to request_etags | Fix #2724. | https://github.com/huggingface/datasets/pull/2725 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2725",
"html_url": "https://github.com/huggingface/datasets/pull/2725",
"diff_url": "https://github.com/huggingface/datasets/pull/2725.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2725.patch",
"merged_at": "2021-07-28T16:38... | 2,725 | true |
404 Error when loading remote data files from private repo | ## Describe the bug
When loading remote data files from a private repo, a 404 error is raised.
## Steps to reproduce the bug
```python
url = hf_hub_url("lewtun/asr-preds-test", "preds.jsonl", repo_type="dataset")
dset = load_dataset("json", data_files=url, use_auth_token=True)
# HTTPError: 404 Client Error: Not... | https://github.com/huggingface/datasets/issues/2724 | [
"I guess the issue is when computing the ETags of the remote files. Indeed `use_auth_token` must be passed to `request_etags` here:\r\n\r\nhttps://github.com/huggingface/datasets/blob/35b5e4bc0cb2ed896e40f3eb2a4aa3de1cb1a6c5/src/datasets/builder.py#L160-L160",
"Yes, I remember having properly implemented that: \r... | null | 2,724 | false |
Fix en subset by modifying dataset_info with correct validation infos | - Related to: #2682
We correct the values of the `en` subset concerning the expected validation values (both `num_bytes` and `num_examples`).
Instead of having:
`{"name": "validation", "num_bytes": 828589180707, "num_examples": 364868892, "dataset_name": "c4"}`
We replace with correct values:
`{"name": "vali... | https://github.com/huggingface/datasets/pull/2723 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2723",
"html_url": "https://github.com/huggingface/datasets/pull/2723",
"diff_url": "https://github.com/huggingface/datasets/pull/2723.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2723.patch",
"merged_at": "2021-07-28T15:22... | 2,723 | true |
Missing cache file | The cache file is strangely missing after I restart my program again.
`glue_dataset = datasets.load_dataset('glue', 'sst2')`
`FileNotFoundError: [Errno 2] No such file or directory: /Users/chris/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96d6053ad/dataset_info.json... | https://github.com/huggingface/datasets/issues/2722 | [
"This could be solved by going to the glue/ directory and delete sst2 directory, then load the dataset again will help you redownload the dataset.",
"Hi ! Not sure why this file was missing, but yes the way to fix this is to delete the sst2 directory and to reload the dataset"
] | null | 2,722 | false |
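A sketch of the workaround from the comments, using the cache path from the error message above:
```python
import shutil

import datasets

# Delete the corrupted sst2 cache directory, then reload to trigger a re-download.
shutil.rmtree("/Users/chris/.cache/huggingface/datasets/glue/sst2", ignore_errors=True)
glue_dataset = datasets.load_dataset("glue", "sst2")
```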
Deal with the bad check in test_load.py | This PR removes a check that was added in #2684. My intention with this check was to capture a URL in the error message, but instead it captures a substring of the previous regex match in the test function. Another option would be to replace this check with:
```python
m_paths = re.findall(r"\S*_dummy/_dummy.py\b... | https://github.com/huggingface/datasets/pull/2721 | [
"Hi ! I did a change for this test already in #2662 :\r\n\r\nhttps://github.com/huggingface/datasets/blob/00686c46b7aaf6bfcd4102cec300a3c031284a5a/tests/test_load.py#L312-L316\r\n\r\n(though I have to change the variable name `m_combined_path` to `m_url` or something)\r\n\r\nI guess it's ok to remove this check for... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2721",
"html_url": "https://github.com/huggingface/datasets/pull/2721",
"diff_url": "https://github.com/huggingface/datasets/pull/2721.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2721.patch",
"merged_at": "2021-07-28T08:53... | 2,721 | true |
fix: 🐛 fix two typos | https://github.com/huggingface/datasets/pull/2720 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2720",
"html_url": "https://github.com/huggingface/datasets/pull/2720",
"diff_url": "https://github.com/huggingface/datasets/pull/2720.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2720.patch",
"merged_at": "2021-07-27T18:38... | 2,720 | true | |
Use ETag in streaming mode to detect resource updates | **Is your feature request related to a problem? Please describe.**
I want to cache data I generate from processing a dataset I've loaded in streaming mode, but I currently have no way to know whether the remote data has been updated, so I don't know when to invalidate my cache.
**Describe the solution you'd lik... | https://github.com/huggingface/datasets/issues/2719 | [] | null | 2,719 | false |
New documentation structure | Organize Datasets documentation into four documentation types to improve clarity and discoverability of content.
**Content to add in the very short term (feel free to add anything I'm missing):**
- A discussion on why Datasets uses Arrow that includes some context and background about why we use Arrow. Would also b... | https://github.com/huggingface/datasets/pull/2718 | [
"I just did some minor changes + added some content in these sections: share, about arrow, about cache\r\n\r\nFeel free to mark this PR as ready for review ! :)",
"I just separated the `Share` How-to page into three pages: share, dataset_script and dataset_card.\r\n\r\nThis way in the share page we can explain in... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2718",
"html_url": "https://github.com/huggingface/datasets/pull/2718",
"diff_url": "https://github.com/huggingface/datasets/pull/2718.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2718.patch",
"merged_at": "2021-09-13T17:20... | 2,718 | true |
Fix shuffle on IterableDataset that disables batching in case any functions were mapped | Made a very minor change to fix issue #2716. Added the missing argument in the constructor call.
As discussed in the bug report, the change is made to prevent the `shuffle` method call from resetting the value of `batched` attribute in `MappedExamplesIterable`
Fix #2716. | https://github.com/huggingface/datasets/pull/2717 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2717",
"html_url": "https://github.com/huggingface/datasets/pull/2717",
"diff_url": "https://github.com/huggingface/datasets/pull/2717.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2717.patch",
"merged_at": "2021-07-26T16:30... | 2,717 | true |
Calling shuffle on IterableDataset will disable batching in case any functions were mapped | When using a dataset in streaming mode, if one applies the `shuffle` method on the dataset and a `map` method with `batched=True`, then the batching operation will not happen; instead, `batched` will be set to `False`
I did RCA on the dataset codebase, the problem is emerging from [this line of code](https://github.com/h... | https://github.com/huggingface/datasets/issues/2716 | [
"Hi :) Good catch ! Feel free to open a PR if you want to contribute, this would be very welcome ;)",
"Have raised the PR [here](https://github.com/huggingface/datasets/pull/2717)",
"Fixed by #2717."
] | null | 2,716 | false |
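A minimal sketch of the reported behavior (the dataset name is illustrative); before the fix in #2717, `shuffle` rebuilt `MappedExamplesIterable` without forwarding the `batched` argument:
```python
from datasets import load_dataset

ds = load_dataset("oscar", "unshuffled_deduplicated_en", split="train", streaming=True)
ds = ds.map(lambda batch: batch, batched=True)
# Before the fix, this call silently reset the mapped iterable's batched flag to False:
ds = ds.shuffle(buffer_size=1000, seed=42)
```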
Update PAN-X data URL in XTREME dataset | Related to #2710, #2691. | https://github.com/huggingface/datasets/pull/2715 | [
"Merging since the CI is just about missing infos in the dataset card"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2715",
"html_url": "https://github.com/huggingface/datasets/pull/2715",
"diff_url": "https://github.com/huggingface/datasets/pull/2715.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2715.patch",
"merged_at": "2021-07-26T13:27... | 2,715 | true |
add more precise information for size | For the import into ELG, we would like a more precise description of the size of the dataset, instead of the current size categories. The size can be expressed in bytes, or any other preferred size unit. As suggested in the slack channel, perhaps this could be computed with a regex for existing datasets. | https://github.com/huggingface/datasets/issues/2714 | [
"We already have this information in the dataset_infos.json files of each dataset.\r\nMaybe we can parse these files in the backend to return their content with the endpoint at huggingface.co/api/datasets\r\n\r\nFor now if you want to access this info you have to load the json for each dataset. For example:\r\n- fo... | null | 2,714 | false |
Enumerate all ner_tags values in WNUT 17 dataset | This PR does:
- Enumerate all ner_tags in dataset card Data Fields section
- Add all metadata tags to dataset card
Close #2709. | https://github.com/huggingface/datasets/pull/2713 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2713",
"html_url": "https://github.com/huggingface/datasets/pull/2713",
"diff_url": "https://github.com/huggingface/datasets/pull/2713.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2713.patch",
"merged_at": "2021-07-26T09:30... | 2,713 | true |
Update WikiANN data URL | WikiANN data source URL is no longer accessible: 404 error from Dropbox.
We have decided to host it at Hugging Face. This PR updates the data source URL, the metadata JSON file and the dataset card.
Close #2691. | https://github.com/huggingface/datasets/pull/2710 | [
"We have to update the URL in the XTREME benchmark as well:\r\n\r\nhttps://github.com/huggingface/datasets/blob/0dfc639cec450ed8762a997789a2ed63e63cdcf2/datasets/xtreme/xtreme.py#L411-L411\r\n\r\n"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2710",
"html_url": "https://github.com/huggingface/datasets/pull/2710",
"diff_url": "https://github.com/huggingface/datasets/pull/2710.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2710.patch",
"merged_at": "2021-07-26T09:34... | 2,710 | true |
Missing documentation for wnut_17 (ner_tags) | On the info page of the wnut_17 data set (https://huggingface.co/datasets/wnut_17), the model output of ner-tags is only documented for these 5 cases:
`ner_tags: a list of classification labels, with possible values including O (0), B-corporation (1), I-corporation (2), B-creative-work (3), I-creative-work (4).`
... | https://github.com/huggingface/datasets/issues/2709 | [
"Hi @maxpel, thanks for reporting this issue.\r\n\r\nIndeed, the documentation in the dataset card is not complete. I’m opening a Pull Request to fix it.\r\n\r\nAs the paper explains, there are 6 entity types and we have ordered them alphabetically: `corporation`, `creative-work`, `group`, `location`, `person` and ... | null | 2,709 | false |
QASC: incomplete training set | ## Describe the bug
The training instances are not loaded properly.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("qasc", script_version='1.10.2')
def load_instances(split):
    instances = dataset[split]
    print(f"split: {split} - size: {len(instanc... | https://github.com/huggingface/datasets/issues/2708 | [
"Hi @danyaljj, thanks for reporting.\r\n\r\nUnfortunately, I have not been able to reproduce your problem. My train split has 8134 examples:\r\n```ipython\r\nIn [10]: ds[\"train\"]\r\nOut[10]:\r\nDataset({\r\n features: ['id', 'question', 'choices', 'answerKey', 'fact1', 'fact2', 'combinedfact', 'formatted_quest... | null | 2,708 | false |
404 Not Found Error when loading LAMA dataset | The [LAMA](https://huggingface.co/datasets/viewer/?dataset=lama) probing dataset is not available for download:
Steps to Reproduce:
1. `from datasets import load_dataset`
2. `dataset = load_dataset('lama', 'trex')`.
Results:
`FileNotFoundError: Couldn't find file locally at lama/lama.py, or remotely ... | https://github.com/huggingface/datasets/issues/2707 | [
"Hi @dwil2444! I was able to reproduce your error when I downgraded to v1.1.2. Updating to the latest version of Datasets fixed the error for me :)",
"Hi @dwil2444, thanks for reporting.\r\n\r\nCould you please confirm which `datasets` version you were using and if the problem persists after you update it to the ... | null | 2,707 | false |
Update BibTeX entry | Update BibTeX entry. | https://github.com/huggingface/datasets/pull/2706 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2706",
"html_url": "https://github.com/huggingface/datasets/pull/2706",
"diff_url": "https://github.com/huggingface/datasets/pull/2706.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2706.patch",
"merged_at": "2021-07-22T12:43... | 2,706 | true |
404 not found error on loading WIKIANN dataset | ## Describe the bug
Unable to retrieve the wikiann English dataset
## Steps to reproduce the bug
```python
from datasets import list_datasets, load_dataset, list_metrics, load_metric
WIKIANN = load_dataset("wikiann","en")
```
## Expected results
Colab notebook should display successful download status
## Act... | https://github.com/huggingface/datasets/issues/2705 | [
"Hi @ronbutan, thanks for reporting.\r\n\r\nYou are right: we have recently found that the link to the original PAN-X dataset (also called WikiANN), hosted at Dropbox, is no longer working.\r\n\r\nWe have opened an issue in the GitHub repository of the original dataset (afshinrahimi/mmner#4) and we have also contac... | null | 2,705 | false |
Fix pick default config name message | The error message telling which config name to load is not displayed.
This is because the code was treating the config kwargs as non-empty, which is a special case for custom configs created on the fly. The issue appeared after this change: https://github.com/huggingface/datasets/pull/2659
I fixed that by ma... | https://github.com/huggingface/datasets/pull/2704 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2704",
"html_url": "https://github.com/huggingface/datasets/pull/2704",
"diff_url": "https://github.com/huggingface/datasets/pull/2704.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2704.patch",
"merged_at": "2021-07-22T10:02... | 2,704 | true |
Bad message when config name is missing | When loading a dataset that has several configurations, we expect to see an error message if the user doesn't specify a config name.
However in `datasets` 1.10.0 and 1.10.1 it doesn't show the right message:
```python
import datasets
datasets.load_dataset("glue")
```
raises
```python
AttributeError: 'Bui... | https://github.com/huggingface/datasets/issues/2703 | [] | null | 2,703 | false |
Update BibTeX entry | Update BibTeX entry. | https://github.com/huggingface/datasets/pull/2702 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2702",
"html_url": "https://github.com/huggingface/datasets/pull/2702",
"diff_url": "https://github.com/huggingface/datasets/pull/2702.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2702.patch",
"merged_at": "2021-07-22T09:17... | 2,702 | true |
Fix download_mode docstrings | Fix `download_mode` docstrings. | https://github.com/huggingface/datasets/pull/2701 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2701",
"html_url": "https://github.com/huggingface/datasets/pull/2701",
"diff_url": "https://github.com/huggingface/datasets/pull/2701.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2701.patch",
"merged_at": "2021-07-22T09:33... | 2,701 | true |
from datasets import Dataset is failing | ## Describe the bug
A clear and concise description of what the bug is.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import Dataset
```
## Expected results
A clear and concise description of the expected results.
## Actual results
Specify the actual results or... | https://github.com/huggingface/datasets/issues/2700 | [
"Hi @kswamy15, thanks for reporting.\r\n\r\nWe are fixing this critical issue and making an urgent patch release of the `datasets` library today.\r\n\r\nIn the meantime, you can circumvent this issue by updating the `tqdm` library: `!pip install -U tqdm`"
] | null | 2,700 | false |
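The interim workaround from the comment above, as a notebook cell:
```python
# Update tqdm so the concurrent-helpers import used by datasets succeeds
!pip install -U tqdm
from datasets import Dataset
```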
cannot combine splits merging and streaming? | this does not work:
`dataset = datasets.load_dataset('mc4','iw',split='train+validation',streaming=True)`
with error:
`ValueError: Bad split: train+validation. Available splits: ['train', 'validation']`
these work:
`dataset = datasets.load_dataset('mc4','iw',split='train+validation')`
`dataset = datasets.load_d... | https://github.com/huggingface/datasets/issues/2699 | [
"Hi ! That's missing indeed. We'll try to implement this for the next version :)\r\n\r\nI guess we just need to implement #2564 first, and then we should be able to add support for splits combinations"
] | null | 2,699 | false |
Ignore empty batch when writing | This prevents a schema update with unknown column types, as reported in #2644.
This is my first attempt at fixing the issue. I tested the following:
- First batch returned by a batched map operation is empty.
- An intermediate batch is empty.
- `python -m unittest tests.test_arrow_writer` passes.
However, `ar... | https://github.com/huggingface/datasets/pull/2698 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2698",
"html_url": "https://github.com/huggingface/datasets/pull/2698",
"diff_url": "https://github.com/huggingface/datasets/pull/2698.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2698.patch",
"merged_at": "2021-07-26T13:25... | 2,698 | true |
Fix import on Colab | Fix #2695, fix #2700. | https://github.com/huggingface/datasets/pull/2697 | [
"@lhoestq @albertvillanova - It might be a good idea to have a patch release after this gets merged (presumably tomorrow morning when you're around). The Colab issue linked to this PR is a pretty big blocker. "
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2697",
"html_url": "https://github.com/huggingface/datasets/pull/2697",
"diff_url": "https://github.com/huggingface/datasets/pull/2697.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2697.patch",
"merged_at": "2021-07-22T07:09... | 2,697 | true |
Add support for disable_progress_bar on Windows | This PR is a continuation of #2667 and adds support for `utils.disable_progress_bar()` on Windows when using multiprocessing. This [answer](https://stackoverflow.com/a/6596695/14095927) on SO explains it nicely why the current approach (with calling `utils.is_progress_bar_enabled()` inside `Dataset._map_single`) would ... | https://github.com/huggingface/datasets/pull/2696 | [
"The CI failure seems unrelated to this PR (probably has something to do with Transformers)."
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2696",
"html_url": "https://github.com/huggingface/datasets/pull/2696",
"diff_url": "https://github.com/huggingface/datasets/pull/2696.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2696.patch",
"merged_at": "2021-07-26T09:38... | 2,696 | true |
Cannot import load_dataset on Colab | ## Describe the bug
Got a tqdm concurrent module-not-found error while importing load_dataset from datasets.
## Steps to reproduce the bug
Here [colab notebook](https://colab.research.google.com/drive/1pErWWnVP4P4mVHjSFUtkePd8Na_Qirg4?usp=sharing) to reproduce the error
On colab:
```python
!pip install dataset... | https://github.com/huggingface/datasets/issues/2695 | [
"I'm facing the same issue on Colab today too.\r\n\r\n```\r\nModuleNotFoundError Traceback (most recent call last)\r\n<ipython-input-4-5833ac0f5437> in <module>()\r\n 3 \r\n 4 from ray import tune\r\n----> 5 from datasets import DatasetDict, Dataset\r\n 6 from datasets import lo... | null | 2,695 | false |
fix: 🐛 change string format to allow copy/paste to work in bash | Before: copy/paste resulted in an error because the square bracket
characters `[]` are special characters in bash | https://github.com/huggingface/datasets/pull/2694 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2694",
"html_url": "https://github.com/huggingface/datasets/pull/2694",
"diff_url": "https://github.com/huggingface/datasets/pull/2694.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2694.patch",
"merged_at": "2021-07-22T10:41... | 2,694 | true |
Fix OSCAR Esperanto | The Esperanto part (original) of OSCAR has the wrong number of examples:
```python
from datasets import load_dataset
raw_datasets = load_dataset("oscar", "unshuffled_original_eo")
```
raises
```python
NonMatchingSplitsSizesError:
[{'expected': SplitInfo(name='train', num_bytes=314188336, num_examples=121171, da... | https://github.com/huggingface/datasets/pull/2693 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2693",
"html_url": "https://github.com/huggingface/datasets/pull/2693",
"diff_url": "https://github.com/huggingface/datasets/pull/2693.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2693.patch",
"merged_at": "2021-07-21T14:53... | 2,693 | true |
Update BibTeX entry | Update BibTeX entry | https://github.com/huggingface/datasets/pull/2692 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2692",
"html_url": "https://github.com/huggingface/datasets/pull/2692",
"diff_url": "https://github.com/huggingface/datasets/pull/2692.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2692.patch",
"merged_at": "2021-07-21T15:31... | 2,692 | true |
xtreme / pan-x cannot be downloaded | ## Describe the bug
Dataset xtreme / pan-x cannot be loaded
Seems related to https://github.com/huggingface/datasets/pull/2326
## Steps to reproduce the bug
```python
dataset = load_dataset("xtreme", "PAN-X.fr")
```
## Expected results
Load the dataset
## Actual results
```
FileNotFoundError:... | https://github.com/huggingface/datasets/issues/2691 | [
"Hi @severo, thanks for reporting.\r\n\r\nHowever I have not been able to reproduce this issue. Could you please confirm if the problem persists for you?\r\n\r\nMaybe Dropbox (where the data source is hosted) was temporarily unavailable when you tried.",
"Hmmm, the file (https://www.dropbox.com/s/dl/12h3qqog6q4bj... | null | 2,691 | false |
Docs details | Some comments here:
- the code samples assume the expected libraries have already been installed. Maybe add a section at start, or add it to every code sample. Something like `pip install datasets transformers torch 'datasets[streaming]'` (maybe just link to https://huggingface.co/docs/datasets/installation.html + ... | https://github.com/huggingface/datasets/pull/2690 | [
"Thanks for all the comments and for the corrections in the docs !\r\n\r\nAbout all the points you mentioned:\r\n\r\n> * the code samples assume the expected libraries have already been installed. Maybe add a section at start, or add it to every code sample. Something like `pip install datasets transformers torch ... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2690",
"html_url": "https://github.com/huggingface/datasets/pull/2690",
"diff_url": "https://github.com/huggingface/datasets/pull/2690.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2690.patch",
"merged_at": "2021-07-27T18:40... | 2,690 | true |
cannot save the dataset to disk after rename_column | ## Describe the bug
If you use `rename_column` and do no other modification, you will be unable to save the dataset using `save_to_disk`
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
In [1]: from datasets import Dataset, load_from_disk
In [5]: dataset=Dataset.from_dict({'foo': [0]})... | https://github.com/huggingface/datasets/issues/2689 | [
"Hi ! That's because you are trying to overwrite a file that is already open and being used.\r\nIndeed `foo/dataset.arrow` is open and used by your `dataset` object.\r\n\r\nWhen you do `rename_column`, the resulting dataset reads the data from the same arrow file.\r\nIn other cases like when using `map` on the othe... | null | 2,689 | false |
Hebrew language codes he and iw should be treated as aliases | https://huggingface.co/datasets/mc4 is not listed when searching for Hebrew datasets (he), as it uses the older language code iw, preventing discoverability. | https://github.com/huggingface/datasets/issues/2688 | [
"Hi @eyaler, thanks for reporting.\r\n\r\nWhile you are true with respect the Hebrew language tag (\"iw\" is deprecated and \"he\" is the preferred value), in the \"mc4\" dataset (which is a derived dataset) we have kept the language tags present in the original dataset: [Google C4](https://www.tensorflow.org/datas... | null | 2,688 | false |
Minor documentation fix | Currently, [Writing a dataset loading script](https://huggingface.co/docs/datasets/add_dataset.html) page has a small error. A link to `matinf` dataset in [_Dataset scripts of reference_](https://huggingface.co/docs/datasets/add_dataset.html#dataset-scripts-of-reference) section actually leads to `xsquad`, instead. Thi... | https://github.com/huggingface/datasets/pull/2687 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2687",
"html_url": "https://github.com/huggingface/datasets/pull/2687",
"diff_url": "https://github.com/huggingface/datasets/pull/2687.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2687.patch",
"merged_at": "2021-07-21T13:04... | 2,687 | true |
Fix bad config ids that name cache directories | `data_dir=None` was considered a dataset config parameter, hence creating a special config_id for all datasets being loaded.
Since the config_id is used to name the cache directories, this led to datasets being regenerated for users.
I fixed this by ignoring the value of `data_dir` when it's `None` when computing... | https://github.com/huggingface/datasets/pull/2686 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2686",
"html_url": "https://github.com/huggingface/datasets/pull/2686",
"diff_url": "https://github.com/huggingface/datasets/pull/2686.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2686.patch",
"merged_at": "2021-07-20T16:27... | 2,686 | true |
Fix Blog Authorship Corpus dataset | This PR:
- Update the JSON metadata file, which previously was raising a `NonMatchingSplitsSizesError`
- Fix the codec of the data files (`latin_1` instead of `utf-8`), which previously was raising ` UnicodeDecodeError` for some files
Close #2679. | https://github.com/huggingface/datasets/pull/2685 | [
"Normally, I'm expecting errors from the validation of the README file... 😅 ",
"That is:\r\n```\r\n=========================== short test summary info ============================\r\nFAILED tests/test_dataset_cards.py::test_changed_dataset_card[blog_authorship_corpus]\r\n==== 1 failed, 3182 passed, 2763 skipped,... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2685",
"html_url": "https://github.com/huggingface/datasets/pull/2685",
"diff_url": "https://github.com/huggingface/datasets/pull/2685.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2685.patch",
"merged_at": "2021-07-21T13:11... | 2,685 | true |
Print absolute local paths in load_dataset error messages | Use absolute local paths in the error messages of `load_dataset` as per @stas00's suggestion in https://github.com/huggingface/datasets/pull/2500#issuecomment-874891223 | https://github.com/huggingface/datasets/pull/2684 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2684",
"html_url": "https://github.com/huggingface/datasets/pull/2684",
"diff_url": "https://github.com/huggingface/datasets/pull/2684.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2684.patch",
"merged_at": "2021-07-22T14:01... | 2,684 | true |
Cache directories changed due to recent changes in how config kwargs are handled | Since #2659 I can see weird cache directory names with hashes in the config id, even though no additional config kwargs are passed. For example:
```python
from datasets import load_dataset_builder
c4_builder = load_dataset_builder("c4", "en")
print(c4_builder.cache_dir)
# /Users/quentinlhoest/.cache/huggingfac... | https://github.com/huggingface/datasets/issues/2683 | [] | null | 2,683 | false |
Fix c4 expected files | Some files were not registered in the list of expected files to download
Fix https://github.com/huggingface/datasets/issues/2677 | https://github.com/huggingface/datasets/pull/2682 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2682",
"html_url": "https://github.com/huggingface/datasets/pull/2682",
"diff_url": "https://github.com/huggingface/datasets/pull/2682.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2682.patch",
"merged_at": "2021-07-20T14:38... | 2,682 | true |
5 duplicate datasets | ## Describe the bug
In 5 cases, I could find a dataset on Paperswithcode which references two Hugging Face datasets as dataset loaders. They are:
- https://paperswithcode.com/dataset/multinli -> https://huggingface.co/datasets/multi_nli and https://huggingface.co/datasets/multi_nli_mismatch
<img width="838... | https://github.com/huggingface/datasets/issues/2681 | [
"Yes this was documented in the PR that added this hf->paperswithcode mapping (https://github.com/huggingface/datasets/pull/2404) and AFAICT those are slightly distinct datasets so I think it's a wontfix\r\n\r\nFor context on the paperswithcode mapping you can also refer to https://github.com/huggingface/huggingfac... | null | 2,681 | false |
feat: 🎸 add paperswithcode id for qasper dataset | The reverse reference exists on paperswithcode:
https://paperswithcode.com/dataset/qasper | https://github.com/huggingface/datasets/pull/2680 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2680",
"html_url": "https://github.com/huggingface/datasets/pull/2680",
"diff_url": "https://github.com/huggingface/datasets/pull/2680.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2680.patch",
"merged_at": "2021-07-20T14:04... | 2,680 | true |
Cannot load the blog_authorship_corpus due to codec errors | ## Describe the bug
A codec error is raised while loading the blog_authorship_corpus.
## Steps to reproduce the bug
```
from datasets import load_dataset
raw_datasets = load_dataset("blog_authorship_corpus")
```
## Expected results
Loading the dataset without errors.
## Actual results
An error simila... | https://github.com/huggingface/datasets/issues/2679 | [
"Hi @izaskr, thanks for reporting.\r\n\r\nHowever the traceback you joined does not correspond to the codec error message: it is about other error `NonMatchingSplitsSizesError`. Maybe you missed some important part of your traceback...\r\n\r\nI'm going to have a look at the dataset anyway...",
"Hi @izaskr, thanks... | null | 2,679 | false |
Import Error in Kaggle notebook | ## Describe the bug
Not able to import the datasets library in Kaggle notebooks
## Steps to reproduce the bug
```python
!pip install datasets
import datasets
```
## Expected results
No such error
## Actual results
```
ImportError Traceback (most recent call last)
<ipython-inp... | https://github.com/huggingface/datasets/issues/2678 | [
"This looks like an issue with PyArrow. Did you try reinstalling it ?",
"@lhoestq I did, and then let pip handle the installation in `pip import datasets`. I also tried using conda but it gives the same error.\r\n\r\nEdit: pyarrow version on kaggle is 4.0.0, it gets replaced with 4.0.1. So, I don't think uninstal... | null | 2,678 | false |
Error when downloading C4 | Hi,
I am trying to download the `en` corpus from the C4 dataset. However, I get an error caused by the validation files download (see image). My code is very primitive:
`datasets.load_dataset('c4', 'en')`
Is this a bug or do I have some configurations missing on my server?
Thanks!
<img width="1014" alt="Снимок экрана 2... | https://github.com/huggingface/datasets/issues/2677 | [
"Hi Thanks for reporting !\r\nIt looks like these files are not correctly reported in the list of expected files to download, let me fix that ;)",
"Alright this is fixed now. We'll do a new release soon to make the fix available.\r\n\r\nIn the meantime feel free to simply pass `ignore_verifications=True` to `load... | null | 2,677 | false |
Increase json reader block_size automatically | Currently some files can't be read with the default parameters of the JSON lines reader.
For example this one:
https://huggingface.co/datasets/thomwolf/codeparrot/resolve/main/file-000000000006.json.gz
raises a pyarrow error:
```python
ArrowInvalid: straddling object straddles two block boundaries (try to increa... | https://github.com/huggingface/datasets/pull/2676 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2676",
"html_url": "https://github.com/huggingface/datasets/pull/2676",
"diff_url": "https://github.com/huggingface/datasets/pull/2676.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2676.patch",
"merged_at": "2021-07-19T17:51... | 2,676 | true |
Parallelize ETag requests | Since https://github.com/huggingface/datasets/pull/2628 we use the ETag of the remote data files to compute the directory in the cache where a dataset is saved. This is useful in order to reload the dataset from the cache only if the remote files haven't changed.
In this PR, I made the ETag requests parallel using multi... | https://github.com/huggingface/datasets/pull/2675 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2675",
"html_url": "https://github.com/huggingface/datasets/pull/2675",
"diff_url": "https://github.com/huggingface/datasets/pull/2675.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2675.patch",
"merged_at": "2021-07-19T19:33... | 2,675 | true |
Fix sacrebleu parameter name | DONE:
- Fix parameter name: `smooth` to `smooth_method`.
- Improve kwargs description.
- Align docs on using a metric.
- Add example of passing additional arguments in using metrics.
Related to #2669. | https://github.com/huggingface/datasets/pull/2674 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2674",
"html_url": "https://github.com/huggingface/datasets/pull/2674",
"diff_url": "https://github.com/huggingface/datasets/pull/2674.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2674.patch",
"merged_at": "2021-07-19T08:07... | 2,674 | true |
Fix potential DuplicatedKeysError in SQuAD | DONE:
- Fix potential DuplicatedKeysError by ensuring keys are unique.
- Align examples in the docs with the SQuAD code.
We should promote as a good practice that keys be generated programmatically and uniquely, instead of read from the data (which might not be unique).
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2673",
"html_url": "https://github.com/huggingface/datasets/pull/2673",
"diff_url": "https://github.com/huggingface/datasets/pull/2673.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2673.patch",
"merged_at": "2021-07-19T07:08... | 2,673 | true |
Fix potential DuplicatedKeysError in LibriSpeech | DONE:
- Fix unnecessary path join.
- Fix potential DuplicatedKeysError by ensuring keys are unique.
We should promote as a good practice that keys be generated programmatically and uniquely, instead of read from the data (which might not be unique).
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2672",
"html_url": "https://github.com/huggingface/datasets/pull/2672",
"diff_url": "https://github.com/huggingface/datasets/pull/2672.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2672.patch",
"merged_at": "2021-07-19T06:28... | 2,672 | true |
Mesinesp development and training data sets have been added. | https://zenodo.org/search?page=1&size=20&q=mesinesp, Mesinesp has Medical Semantic Indexed records in Spanish. Indexing is done using DeCS codes, a sort of Spanish equivalent to MeSH terms.
The Mesinesp (Spanish BioASQ track, see https://temu.bsc.es/mesinesp) development set has a total of 750 records.
The Mesinesp ... | https://github.com/huggingface/datasets/pull/2671 | [
"It'll be new pull request with new commits."
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2671",
"html_url": "https://github.com/huggingface/datasets/pull/2671",
"diff_url": "https://github.com/huggingface/datasets/pull/2671.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2671.patch",
"merged_at": null
} | 2,671 | true |
Using sharding to parallelize indexing | **Is your feature request related to a problem? Please describe.**
Creating an Elasticsearch index on a large dataset can take quite long and cannot be parallelized across shards (the index creations collide)
**Describe the solution you'd like**
When working on dataset shards, if an index already exists, its mapping ... | https://github.com/huggingface/datasets/issues/2670 | [] | null | 2,670 | false |
Metric kwargs are not passed to underlying external metric f1_score | ## Describe the bug
When I want to use F1 score with average="min", this keyword argument does not seem to be passed through to the underlying sklearn metric. This is evident because [sklearn](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html) throws an error telling me so.
## Steps to... | https://github.com/huggingface/datasets/issues/2669 | [
"Hi @BramVanroy, thanks for reporting.\r\n\r\nFirst, note that `\"min\"` is not an allowed value for `average`. According to scikit-learn [documentation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html), `average` can only take the values: `{\"micro\", \"macro\", \"samples\", \"weigh... | null | 2,669 | false |
Add Russian SuperGLUE | Hi,
This adds the [Russian SuperGLUE](https://russiansuperglue.com/) dataset. For the most part I reused the code for the original SuperGLUE, although there are some relatively minor differences in the structure that I accounted for. | https://github.com/huggingface/datasets/pull/2668 | [
"Added the missing label classes and their explanations (to the best of my understanding)",
"Thanks a lot ! Once the last comment about the label names is addressed we can merge :)"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2668",
"html_url": "https://github.com/huggingface/datasets/pull/2668",
"diff_url": "https://github.com/huggingface/datasets/pull/2668.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2668.patch",
"merged_at": "2021-07-29T11:50... | 2,668 | true |
Use tqdm from tqdm_utils | This PR replaces `tqdm` from the `tqdm` lib with `tqdm` from `datasets.utils.tqdm_utils`. With this change, it's possible to disable progress bars just by calling `disable_progress_bar`. Note this doesn't work on Windows when using multiprocessing due to how global variables are shared between processes. Currently, the... | https://github.com/huggingface/datasets/pull/2667 | [
"The current CI failure is due to modifications in the dataset script.",
"Merging since the CI is only failing because of dataset card issues, which is unrelated to this PR"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2667",
"html_url": "https://github.com/huggingface/datasets/pull/2667",
"diff_url": "https://github.com/huggingface/datasets/pull/2667.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2667.patch",
"merged_at": "2021-07-19T17:32... | 2,667 | true |
Adds CodeClippy dataset [WIP] | CodeClippy is an open-source code dataset scraped from GitHub during the Flax/JAX community week
https://the-eye.eu/public/AI/training_data/code_clippy_data/ | https://github.com/huggingface/datasets/pull/2666 | [
"Thanks for your contribution, @arampacha. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2666",
"html_url": "https://github.com/huggingface/datasets/pull/2666",
"diff_url": "https://github.com/huggingface/datasets/pull/2666.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2666.patch",
"merged_at": null
} | 2,666 | true |
Adds APPS dataset to the hub [WIP] | A loading script for [APPS dataset](https://github.com/hendrycks/apps) | https://github.com/huggingface/datasets/pull/2665 | [
"Thanks for your contribution, @arampacha. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us if... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2665",
"html_url": "https://github.com/huggingface/datasets/pull/2665",
"diff_url": "https://github.com/huggingface/datasets/pull/2665.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2665.patch",
"merged_at": null
} | 2,665 | true |
[`to_json`] add multi-proc sharding support | As discussed on Slack, it appears that `to_json` is quite slow on huge datasets like OSCAR.
I implemented sharded saving, which is much, much faster, but the tqdm bars all overwrite each other, so it's hard to make sense of the progress; if possible, ideally this multi-proc support could be implemented internally i... | https://github.com/huggingface/datasets/issues/2663 | [
"Hi @stas00, \r\nI want to work on this issue and I was thinking why don't we use `imap` [in this loop](https://github.com/huggingface/datasets/blob/440b14d0dd428ae1b25881aa72ba7bbb8ad9ff84/src/datasets/io/json.py#L99)? This way, using offset (which is being used to slice the pyarrow table) we can convert pyarrow ... | null | 2,663 | false |
Load Dataset from the Hub (NO DATASET SCRIPT) | ## Load the data from any Dataset repository on the Hub
This PR adds support for loading datasets from any dataset repository on the hub, without requiring any dataset script.
As a user it's now possible to create a repo and upload some csv/json/text/parquet files, and then be able to load the data in one line. H... | https://github.com/huggingface/datasets/pull/2662 | [
"This is ready for review now :)\r\n\r\nI would love to have some feedback on the changes in load.py @albertvillanova. There are many changes so if you have questions let me know, especially on the `resolve_data_files` functions and on the changes in `prepare_module`.\r\n\r\nAnd @thomwolf if you want to take a look... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2662",
"html_url": "https://github.com/huggingface/datasets/pull/2662",
"diff_url": "https://github.com/huggingface/datasets/pull/2662.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2662.patch",
"merged_at": "2021-08-25T14:18... | 2,662 | true |
Add SD task for SUPERB | Include the SD (Speaker Diarization) task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051) and `s3prl` [instructions](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sd-speaker-diarization).
TODO:
- [x] Generate the LibriMix corpus
- [x] Prepare the corpus for diarization
- [x] Upl... | https://github.com/huggingface/datasets/pull/2661 | [
"I make a summary about our discussion with @lewtun and @Narsil on the agreed schema for this dataset and the additional steps required to generate the 2D array labels:\r\n- The labels for this dataset are a 2D array:\r\n Given an example:\r\n ```python\r\n {\"record_id\": record_id, \"file\": file, \"start\": s... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2661",
"html_url": "https://github.com/huggingface/datasets/pull/2661",
"diff_url": "https://github.com/huggingface/datasets/pull/2661.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2661.patch",
"merged_at": "2021-08-04T17:03... | 2,661 | true |
Move checks from _map_single to map | The goal of this PR is to remove duplicated checks in the `map` logic to execute them only once whenever possible (`fn_kwargs`, `input_columns`, ...). Additionally, this PR improves the consistency (to align it with `input_columns`) of the `remove_columns` check by adding support for a single string value, which is the... | https://github.com/huggingface/datasets/pull/2660 | [
"@lhoestq This one has been open for a while. Could you please take a look?",
"@lhoestq Ready for the final review!",
"I forgot to update the signature of `DatasetDict.map`, so did that now."
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2660",
"html_url": "https://github.com/huggingface/datasets/pull/2660",
"diff_url": "https://github.com/huggingface/datasets/pull/2660.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2660.patch",
"merged_at": "2021-09-06T14:12... | 2,660 | true |
Allow dataset config kwargs to be None | Close https://github.com/huggingface/datasets/issues/2658
The dataset config kwargs that were set to None were simply ignored.
This was an issue when None has some meaning for certain parameters of certain builders, like the `sep` parameter of the "csv" builder, which allows inferring the separator.
cc @SBrandeis | https://github.com/huggingface/datasets/pull/2659 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2659",
"html_url": "https://github.com/huggingface/datasets/pull/2659",
"diff_url": "https://github.com/huggingface/datasets/pull/2659.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2659.patch",
"merged_at": "2021-07-16T12:46... | 2,659 | true |
Can't pass `sep=None` to load_dataset("csv", ...) to infer the separator via pandas.read_csv | When doing `load_dataset("csv", sep=None)`, the `sep` passed to `pd.read_csv` is still the default `sep=","`, which makes it impossible to make the csv loader infer the separator.
Related to https://github.com/huggingface/datasets/pull/2656
cc @SBrandeis | https://github.com/huggingface/datasets/issues/2658 | [] | null | 2,658 | false |
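What the fix enables, as a sketch (the file name is illustrative):
```python
from datasets import load_dataset

# With None no longer swallowed, pandas falls back to its python engine and sniffs the delimiter
ds = load_dataset("csv", data_files="table.tsv", sep=None)
```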
`to_json` reporting enhancements | While using `to_json`, two things came to mind that would have made the experience easier on the user:
1. Could we have a `desc` arg for the tqdm use and a fallback to just `to_json` so that it'd be clear to the user what's happening? Surely, one can just print the description before calling json, but I thought perhaps... | https://github.com/huggingface/datasets/issues/2657 | [] | null | 2,657 | false |
Change `from_csv` default arguments | Passing `sep=None` to pandas's `read_csv` lets pandas guess the CSV file's separator
This PR allows users to use this pandas's feature by passing `sep=None` to `Dataset.from_csv`:
```python
Dataset.from_csv(
...,
sep=None
)
``` | https://github.com/huggingface/datasets/pull/2656 | [
"This is not the default in pandas right ?\r\nWe try to align our CSV loader with the pandas API.\r\n\r\nMoreover according to their documentation, the python parser is used when sep is None, which might not be the fastest one.\r\n\r\nMaybe users could just specify `sep=None` themselves ?\r\nIn this case we should ... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2656",
"html_url": "https://github.com/huggingface/datasets/pull/2656",
"diff_url": "https://github.com/huggingface/datasets/pull/2656.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2656.patch",
"merged_at": null
} | 2,656 | true |
Allow the selection of multiple columns at once | **Is your feature request related to a problem? Please describe.**
Similar to pandas, it would be great if we could select multiple columns at once.
**Describe the solution you'd like**
```python
my_dataset = ... # Has columns ['idx', 'sentence', 'label']
idx, label = my_dataset[['idx', 'label']]
```
**... | https://github.com/huggingface/datasets/issues/2655 | [
"Hi! I was looking into this and hope you can clarify a point. Your my_dataset variable would be of type DatasetDict which means the alternative you've described (dict comprehension) is what makes sense. \r\nIs there a reason why you wouldn't want to convert my_dataset to a pandas df if you'd like to use it like on... | null | 2,655 | false |
Give a user feedback if the dataset he loads is streamable or not | **Is your feature request related to a problem? Please describe.**
I would love to know whether a `dataset` is streamable or not with the current implementation.
**Describe the solution you'd like**
We could show a warning when a dataset is loaded with `load_dataset('...', streaming=True)` but it's not streamable, e.g.... | https://github.com/huggingface/datasets/issues/2654 | [
"#self-assign",
"I understand it already raises a `NotImplementedError` exception, eg:\r\n\r\n```\r\n>>> dataset = load_dataset(\"journalists_questions\", name=\"plain_text\", split=\"train\", streaming=True)\r\n\r\n[...]\r\nNotImplementedError: Extraction protocol for file at https://drive.google.com/uc?export=d... | null | 2,654 | false |
Add SD task for SUPERB | Include the SD (Speaker Diarization) task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051) and `s3prl` [instructions](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sd-speaker-diarization).
Steps:
- [x] Generate the LibriMix corpus
- [x] Prepare the corpus for diarization
- [x] Up... | https://github.com/huggingface/datasets/issues/2653 | [
"Note that this subset requires us to:\r\n\r\n* generate the LibriMix corpus from LibriSpeech\r\n* prepare the corpus for diarization\r\n\r\nAs suggested by @lhoestq we should perform these steps locally and add the prepared data to this public repo on the Hub: https://huggingface.co/datasets/superb/superb-data\r\n... | null | 2,653 | false |
Fix logging docstring | Remove "no tqdm bars" from the docstring in the logging module to align it with the changes introduced in #2534. | https://github.com/huggingface/datasets/pull/2652 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2652",
"html_url": "https://github.com/huggingface/datasets/pull/2652",
"diff_url": "https://github.com/huggingface/datasets/pull/2652.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2652.patch",
"merged_at": "2021-07-15T09:57... | 2,652 | true |
Setting log level higher than warning does not suppress progress bar | ## Describe the bug
I would like to disable progress bars for the `.map` method (and for other methods like `.filter` and `load_dataset` as well).
According to #1627, one can suppress them by setting the log level higher than `warning`; however, doing so doesn't suppress them with version 1.9.0.
I also tried to set `DATASETS_VERBOS... | https://github.com/huggingface/datasets/issues/2651 | [
"Hi,\r\n\r\nyou can suppress progress bars by patching logging as follows:\r\n```python\r\nimport datasets\r\nimport logging\r\ndatasets.logging.get_verbosity = lambda: logging.NOTSET\r\n# map call ...\r\n```\r\nEDIT: now you have to use `disable_progress_bar `",
"Thank you, it worked :)",
"See https://github.c... | null | 2,651 | false |
[load_dataset] shard and parallelize the process | - Some huge datasets (e.g. oscar/en) take forever to build the first time, as the build runs on a single CPU core.
- If the build crashes, everything done up to that point gets lost
Request: Shard the build over multiple arrow files, which would enable:
- much faster build by parallelizing the build process
- if the p... | https://github.com/huggingface/datasets/issues/2650 | [
"I need the same feature for distributed training",
"I think @TevenLeScao is exploring adding multiprocessing in `GeneratorBasedBuilder._prepare_split` - feel free to post updates here :)",
"Posted a PR to address the building side, still needs something to load sharded arrow files + tests"
] | null | 2,650 | false |
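For context on where this landed: later releases added a `num_proc` argument to `load_dataset` that shards the build across processes — a sketch, assuming a version recent enough to support it:

```python
from datasets import load_dataset

# Build the dataset with several worker processes instead of a single core
ds = load_dataset("oscar", "unshuffled_deduplicated_en", num_proc=8)
```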
adding progress bar / ETA for `load_dataset` | Please consider:
```
Downloading and preparing dataset oscar/unshuffled_deduplicated_en (download: 462.40 GiB, generated: 1.18 TiB, post-processed: Unknown size, total: 1.63 TiB) to cache/oscar/unshuffled_deduplicated_en/1.0.0/84838bd49d2295f62008383b05620571535451d84545037bb94d6f3501651df2...
HF google storage unre... | https://github.com/huggingface/datasets/issues/2649 | [] | null | 2,649 | false |
Add web_split dataset for Paraphase and Rephrase benchmark | ## Describe:
For getting simple sentences from complex sentences, there are datasets and tasks like wiki_split available in Hugging Face datasets. This web_split is a very similar dataset. Some research papers state that if we train the model on a combination of these two datasets, it will yield better resu...
"#take"
] | null | 2,648 | false |
Fix anchor in README | I forgot to push this fix in #2611, so I'm sending it now. | https://github.com/huggingface/datasets/pull/2647 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2647",
"html_url": "https://github.com/huggingface/datasets/pull/2647",
"diff_url": "https://github.com/huggingface/datasets/pull/2647.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2647.patch",
"merged_at": "2021-07-15T06:50... | 2,647 | true |
downloading of yahoo_answers_topics dataset failed | ## Describe the bug
I get the error `datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files` when I try to download the yahoo_answers_topics dataset.
## Steps to reproduce the bug
self.dataset = load_dataset(
'yahoo_answers_topics', cache_dir=self.config... | https://github.com/huggingface/datasets/issues/2646 | [
"Hi ! I just tested and it worked fine today for me.\r\n\r\nI think this is because the dataset is stored on Google Drive which has a quota limit for the number of downloads per day, see this similar issue https://github.com/huggingface/datasets/issues/996 \r\n\r\nFeel free to try again today, now that the quota wa... | null | 2,646 | false |
load_dataset processing failed with OS error after downloading a dataset | ## Describe the bug
After downloading a dataset like opus100, there is a bug:
OSError: Cannot find data file.
Original error:
dlopen: cannot load any more object with static TLS
## Steps to reproduce the bug
```python
from datasets import load_dataset
this_dataset = load_dataset('opus100', 'af-en')
```
... | https://github.com/huggingface/datasets/issues/2645 | [
"Hi ! It looks like an issue with pytorch.\r\n\r\nCould you try to run `import torch` and see if it raises an error ?",
"> Hi ! It looks like an issue with pytorch.\r\n> \r\n> Could you try to run `import torch` and see if it raises an error ?\r\n\r\nIt works. Thank you!"
] | null | 2,645 | false |
Batched `map` not allowed to return 0 items | ## Describe the bug
I'm trying to use `map` to filter a large dataset by selecting rows that match an expensive condition (files referenced by one of the columns need to exist in the filesystem, so we have to `stat` them). According to [the documentation](https://huggingface.co/docs/datasets/processing.html#augmenting... | https://github.com/huggingface/datasets/issues/2644 | [
"Hi ! Thanks for reporting. Indeed it looks like type inference makes it fail. We should probably just ignore this step until a non-empty batch is passed.",
"Sounds good! Do you want me to propose a PR? I'm quite busy right now, but if it's not too urgent I could take a look next week.",
"Sure if you're interes... | null | 2,644 | false |
Enum used in map functions will raise a RecursionError with dill. | ## Describe the bug
Enums used in functions passed to `map` will fail at pickling with a maximum recursion exception, as described here: https://github.com/uqfoundation/dill/issues/250#issuecomment-852566284
In my particular case, I use an enum to define an argument with fixed options using the `TraininigArguments` ... | https://github.com/huggingface/datasets/issues/2643 | [
"I'm running into this as well. (Thank you so much for reporting @jorgeecardona — was staring at this massive stack trace and unsure what exactly was wrong!)",
"Hi ! Thanks for reporting :)\r\n\r\nUntil this is fixed on `dill`'s side, we could implement a custom saving in our Pickler indefined in utils.py_utils.p... | null | 2,643 | false |
Support multi-worker with streaming dataset (IterableDataset). | **Is your feature request related to a problem? Please describe.**
The current `.map` does not support multi-processing, so the CPU can become a bottleneck if the pre-processing is complex (e.g. t5 span masking).
**Describe the solution you'd like**
Ideally `.map` should support multi-worker like tfds, with `AUTOTUNE`.
**D... | https://github.com/huggingface/datasets/issues/2642 | [
"Hi ! This is a great idea :)\r\nI think we could have something similar to what we have in `datasets.Dataset.map`, i.e. a `num_proc` parameter that tells how many processes to spawn to parallelize the data processing. \r\n\r\nRegarding AUTOTUNE, this could be a nice feature as well, we could see how to add it in a... | null | 2,642 | false |
load_dataset("financial_phrasebank") NonMatchingChecksumError | ## Describe the bug
Attempting to download the financial_phrasebank dataset results in a NonMatchingChecksumError
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("financial_phrasebank", 'sentences_allagree')
```
## Expected results
I expect to see the financi... | https://github.com/huggingface/datasets/issues/2641 | [
"Hi! It's probably because this dataset is stored on google drive and it has a per day quota limit. It should work if you retry, I was able to initiate the download.\r\n\r\nSimilar issue [here](https://github.com/huggingface/datasets/issues/2646)",
"Hi ! Loading the dataset works on my side as well.\r\nFeel free ... | null | 2,641 | false |
Fix docstrings | Fix rendering of some docstrings. | https://github.com/huggingface/datasets/pull/2640 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2640",
"html_url": "https://github.com/huggingface/datasets/pull/2640",
"diff_url": "https://github.com/huggingface/datasets/pull/2640.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2640.patch",
"merged_at": "2021-07-15T06:06... | 2,640 | true |
Refactor patching to specific submodule | Minor reorganization of the code, so that additional patching functions (not related to streaming) might be created.
In relation with the initial approach followed in #2631. | https://github.com/huggingface/datasets/pull/2639 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2639",
"html_url": "https://github.com/huggingface/datasets/pull/2639",
"diff_url": "https://github.com/huggingface/datasets/pull/2639.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2639.patch",
"merged_at": "2021-07-13T16:52... | 2,639 | true |
Streaming for the Json loader | It was not using `open` in the builder. Therefore `pyarrow.json.read_json` was downloading the full file before it could start yielding rows.
Moreover, it appeared that `pyarrow.json.read_json` was not really suited for streaming as it was downloading too much data and failing if `block_size` was not properly configured (related... | https://github.com/huggingface/datasets/pull/2638 | [
"A note is that I think we should add a few indicator of status (as mentioned by @stas00 in #2649), probably at the (1) downloading, (2) extracting and (3) reading steps. In particular when loading many very large files it's interesting to know a bit where we are in the process.",
"I tested locally, and the built... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2638",
"html_url": "https://github.com/huggingface/datasets/pull/2638",
"diff_url": "https://github.com/huggingface/datasets/pull/2638.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2638.patch",
"merged_at": "2021-07-16T15:59... | 2,638 | true |
Streaming for the Pandas loader | It was not using `open` in the builder. Therefore `pd.read_pickle` could fail when streaming from a private repo, for example.
Indeed, when streaming, `open` is extended to support reading from remote files and handles authentication to the HF Hub. | https://github.com/huggingface/datasets/pull/2636 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2636",
"html_url": "https://github.com/huggingface/datasets/pull/2636",
"diff_url": "https://github.com/huggingface/datasets/pull/2636.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2636.patch",
"merged_at": "2021-07-13T14:37... | 2,636 | true |
Streaming for the CSV loader | It was not using `open` in the builder. Therefore `pd.read_csv` was downloading the full file before it could start yielding rows.
Indeed, when streaming, `open` is extended to support reading from remote files progressively. | https://github.com/huggingface/datasets/pull/2635 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2635",
"html_url": "https://github.com/huggingface/datasets/pull/2635",
"diff_url": "https://github.com/huggingface/datasets/pull/2635.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2635.patch",
"merged_at": "2021-07-13T15:19... | 2,635 | true |
Inject ASR template for lj_speech dataset | Related to: #2565, #2633.
cc: @lewtun | https://github.com/huggingface/datasets/pull/2634 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2634",
"html_url": "https://github.com/huggingface/datasets/pull/2634",
"diff_url": "https://github.com/huggingface/datasets/pull/2634.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2634.patch",
"merged_at": "2021-07-13T09:05... | 2,634 | true |