title
stringlengths
1
290
body
stringlengths
0
228k
⌀
html_url
stringlengths
46
51
comments
list
pull_request
dict
number
int64
1
5.59k
is_pull_request
bool
2 classes
Add SEDE dataset
This PR adds the SEDE dataset for the task of realistic Text-to-SQL, following the instructions on how to add a dataset and a dataset card. Please see our paper for more details: https://arxiv.org/abs/2106.05006
https://github.com/huggingface/datasets/pull/2942
[ "Thanks @albertvillanova for your great suggestions! I just pushed a new commit with the necessary fixes. For some reason, the test `test_metric_common` failed for `meteor` metric, which doesn't have any connection to this PR, so I'm trying to rebase and see if it helps.", "Hi @Hazoom,\r\n\r\nYou were right: the ...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2942", "html_url": "https://github.com/huggingface/datasets/pull/2942", "diff_url": "https://github.com/huggingface/datasets/pull/2942.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2942.patch", "merged_at": "2021-09-24T10:39...
2,942
true
OSCAR unshuffled_original_ko: NonMatchingSplitsSizesError
## Describe the bug Cannot download OSCAR `unshuffled_original_ko` due to `NonMatchingSplitsSizesError`. ## Steps to reproduce the bug ```python >>> dataset = datasets.load_dataset('oscar', 'unshuffled_original_ko') NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=25292102197, num...
https://github.com/huggingface/datasets/issues/2941
[ "I tried `unshuffled_original_da` and it is also not working" ]
null
2,941
false
add swedish_medical_ner dataset
Adding the Swedish Medical NER dataset, listed in "Biomedical Datasets - BigScience Workshop 2021"
https://github.com/huggingface/datasets/pull/2940
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2940", "html_url": "https://github.com/huggingface/datasets/pull/2940", "diff_url": "https://github.com/huggingface/datasets/pull/2940.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2940.patch", "merged_at": "2021-10-05T12:13...
2,940
true
MENYO-20k repo has moved, updating URL
Dataset repo moved to https://github.com/uds-lsv/menyo-20k_MT, now editing URL to match. https://github.com/uds-lsv/menyo-20k_MT/blob/master/data/train.tsv is the file we're looking for
https://github.com/huggingface/datasets/pull/2939
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2939", "html_url": "https://github.com/huggingface/datasets/pull/2939", "diff_url": "https://github.com/huggingface/datasets/pull/2939.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2939.patch", "merged_at": "2021-09-21T15:31...
2,939
true
Take namespace into account in caching
Loading a dataset "username/dataset_name" hosted by a user on the hub used to cache the dataset taking only the dataset name into account and ignoring the username. Because of this, if a user later loaded "dataset_name" without specifying the username, it would reload the dataset from the cache instead of failing. I...
https://github.com/huggingface/datasets/pull/2938
[ "We might have collisions if a username and a dataset_name are the same. Maybe instead serialize the dataset name by replacing `/` with some string, eg `__SLASH__`, that will hopefully never appear in a dataset or user name (it's what I did in https://github.com/huggingface/datasets-preview-backend/blob/master/benc...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2938", "html_url": "https://github.com/huggingface/datasets/pull/2938", "diff_url": "https://github.com/huggingface/datasets/pull/2938.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2938.patch", "merged_at": "2021-09-29T13:01...
2,938
true
load_dataset using default cache on Windows causes PermissionError: [WinError 5] Access is denied
## Describe the bug Standard process to download and load the wiki_bio dataset causes PermissionError in Windows 10 and 11. ## Steps to reproduce the bug ```python from datasets import load_dataset ds = load_dataset('wiki_bio') ``` ## Expected results It is expected that the dataset downloads without any er...
https://github.com/huggingface/datasets/issues/2937
[ "Hi @daqieq, thanks for reporting.\r\n\r\nUnfortunately, I was not able to reproduce this bug:\r\n```ipython\r\nIn [1]: from datasets import load_dataset\r\n ...: ds = load_dataset('wiki_bio')\r\nDownloading: 7.58kB [00:00, 26.3kB/s]\r\nDownloading: 2.71kB [00:00, ?B/s]\r\nUsing custom data configuration default\...
null
2,937
false
Check that array is not Float as nan != nan
The exception is meant to check for issues with StructArrays/ListArrays, but it also catches FloatArrays containing NaN values, since nan != nan. Pass on FloatArrays, as we should not raise an exception for them.
https://github.com/huggingface/datasets/pull/2936
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2936", "html_url": "https://github.com/huggingface/datasets/pull/2936", "diff_url": "https://github.com/huggingface/datasets/pull/2936.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2936.patch", "merged_at": "2021-09-21T09:39...
2,936
true
Add Jigsaw unintended Bias
Hi, Here's a first attempt at this dataset. Would be great if it could be merged relatively quickly as it is needed for Bigscience-related stuff. This requires manual download, and I had some trouble generating dummy_data in this setting, so welcoming feedback there.
https://github.com/huggingface/datasets/pull/2935
[ "Note that the tests seem to fail because of a bug in an Exception at the moment, see: https://github.com/huggingface/datasets/pull/2936 for the fix", "@lhoestq implemented your changes, I think this might be ready for another look.", "Thanks @lhoestq, implemented the changes, let me know if anything else pops ...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2935", "html_url": "https://github.com/huggingface/datasets/pull/2935", "diff_url": "https://github.com/huggingface/datasets/pull/2935.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2935.patch", "merged_at": "2021-09-24T10:41...
2,935
true
to_tf_dataset keeps a reference to the open data somewhere, causing issues on windows
To reproduce: ```python import datasets as ds import weakref import gc d = ds.load_dataset("mnist", split="train") ref = weakref.ref(d._data.table) tfd = d.to_tf_dataset("image", batch_size=1, shuffle=False, label_cols="label") del tfd, d gc.collect() assert ref() is None, "Error: there is at least one refe...
https://github.com/huggingface/datasets/issues/2934
[ "I did some investigation and, as it seems, the bug stems from [this line](https://github.com/huggingface/datasets/blob/8004d7c3e1d74b29c3e5b0d1660331cd26758363/src/datasets/arrow_dataset.py#L325). The lifecycle of the dataset from the linked line is bound to one of the returned `tf.data.Dataset`. So my (hacky) sol...
null
2,934
false
Replace script_version with revision
As discussed in https://github.com/huggingface/datasets/pull/2718#discussion_r707013278, the parameter name `script_version` is no longer applicable to datasets without a loading script (i.e., datasets with only raw data files). This PR replaces the parameter name `script_version` with `revision`. This way, we are ...
https://github.com/huggingface/datasets/pull/2933
[ "I'm also fine with the removal in 1.15" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2933", "html_url": "https://github.com/huggingface/datasets/pull/2933", "diff_url": "https://github.com/huggingface/datasets/pull/2933.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2933.patch", "merged_at": "2021-09-20T09:52...
2,933
true
Conda build fails
## Describe the bug Current `datasets` version in conda is 1.9 instead of 1.12. The build of the conda package fails.
https://github.com/huggingface/datasets/issues/2932
[ "Why 1.9 ?\r\n\r\nhttps://anaconda.org/HuggingFace/datasets currently says 1.11", "Alright I added 1.12.0 and 1.12.1 and fixed the conda build #2952 " ]
null
2,932
false
Fix bug in to_tf_dataset
Replace `set_format()` with `with_format()` so that we don't alter the original dataset in `to_tf_dataset()`
https://github.com/huggingface/datasets/pull/2931
[ "I'm going to merge it, but yeah - hopefully the CI runner just cleans that up automatically and few other people run the tests on Windows anyway!" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2931", "html_url": "https://github.com/huggingface/datasets/pull/2931", "diff_url": "https://github.com/huggingface/datasets/pull/2931.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2931.patch", "merged_at": "2021-09-16T17:01...
2,931
true
Mutable columns argument breaks set_format
## Describe the bug If you pass a mutable list to the `columns` argument of `set_format` and then change the list afterwards, the returned columns also change. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("glue", "cola") column_list = ["idx", "label"] datas...
https://github.com/huggingface/datasets/issues/2930
[ "Pushed a fix to my branch #2731 " ]
null
2,930
false
Add regression test for null Sequence
Relates to #2892 and #2900.
https://github.com/huggingface/datasets/pull/2929
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2929", "html_url": "https://github.com/huggingface/datasets/pull/2929", "diff_url": "https://github.com/huggingface/datasets/pull/2929.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2929.patch", "merged_at": "2021-09-17T08:23...
2,929
true
Update BibTeX entry
Update BibTeX entry.
https://github.com/huggingface/datasets/pull/2928
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2928", "html_url": "https://github.com/huggingface/datasets/pull/2928", "diff_url": "https://github.com/huggingface/datasets/pull/2928.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2928.patch", "merged_at": "2021-09-16T12:35...
2,928
true
Datasets 1.12 dataset.filter TypeError: get_indices_from_mask_function() got an unexpected keyword argument
## Describe the bug Upgrading to 1.12 caused the `dataset.filter` call to fail with > get_indices_from_mask_function() got an unexpected keyword argument valid_rel_labels ## Steps to reproduce the bug ```python def filter_good_rows( ex: Dict, valid_rel_labels: Set[str], valid_ner_labels: Set[st...
https://github.com/huggingface/datasets/issues/2927
[ "Thanks for reporting, I'm looking into it :)", "Fixed by #2950." ]
null
2,927
false
Error when downloading datasets to non-traditional cache directories
## Describe the bug When the cache directory is linked (soft link) to a directory on a NetApp device, the download fails. ## Steps to reproduce the bug ```bash ln -s /path/to/netapp/.cache ~/.cache ``` ```python load_dataset("imdb") ``` ## Expected results Successfully loading IMDB dataset ## Actual...
https://github.com/huggingface/datasets/issues/2926
[ "Same here !" ]
null
2,926
false
Add tutorial for no-code dataset upload
This PR is for a tutorial for uploading a dataset to the Hub. It relies on the Hub UI elements to upload a dataset, introduces the online tagging tool for creating tags, and the Dataset card template to get a head start on filling it out. The addition of this tutorial should make it easier for beginners to upload a dat...
https://github.com/huggingface/datasets/pull/2925
[ "Cool, love it ! :)\r\n\r\nFeel free to add a paragraph saying how to load the dataset:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"stevhliu/demo\")\r\n\r\n# or to separate each csv file into several splits\r\ndata_files = {\"train\": \"train.csv\", \"test\": \"test.csv\"}\r\nd...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2925", "html_url": "https://github.com/huggingface/datasets/pull/2925", "diff_url": "https://github.com/huggingface/datasets/pull/2925.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2925.patch", "merged_at": "2021-09-27T17:51...
2,925
true
"File name too long" error for file locks
## Describe the bug Getting the following error when calling `load_dataset("gar1t/test")`: ``` OSError: [Errno 36] File name too long: '<user>/.cache/huggingface/datasets/_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.inc...
https://github.com/huggingface/datasets/issues/2924
[ "Hi, the filename here is less than 255\r\n```python\r\n>>> len(\"_home_garrett_.cache_huggingface_datasets_csv_test-7c856aea083a7043_0.0.0_9144e0a4e8435090117cea53e6c7537173ef2304525df4a077c435d8ee7828ff.incomplete.lock\")\r\n154\r\n```\r\nso not sure why it's considered too long for your filesystem.\r\n(also note...
null
2,924
false
Loading an autonlp dataset raises in normal mode but not in streaming mode
## Describe the bug The same dataset (from autonlp) raises an error in normal mode, but does not raise in streaming mode ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("severo/autonlp-data-sentiment_detection-3c8bcd36", split="train", streaming=False) ## raises an err...
https://github.com/huggingface/datasets/issues/2923
[ "Closing since autonlp dataset are now supported" ]
null
2,923
false
Fix conversion of multidim arrays in list to arrow
Arrow only supports 1-dim arrays. Previously we were converting all the numpy arrays to python lists before instantiating arrow arrays to work around this limitation. However in #2361 we started to keep numpy arrays in order to keep their dtypes. It works when we pass any multi-dim numpy array (the conversion to arrow ...
https://github.com/huggingface/datasets/pull/2922
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2922", "html_url": "https://github.com/huggingface/datasets/pull/2922", "diff_url": "https://github.com/huggingface/datasets/pull/2922.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2922.patch", "merged_at": "2021-09-15T17:21...
2,922
true
Using a list of multi-dim numpy arrays raises an error "can only convert 1-dimensional array values"
This error has been introduced in https://github.com/huggingface/datasets/pull/2361 To reproduce: ```python import numpy as np from datasets import Dataset d = Dataset.from_dict({"a": [np.zeros((2, 2))]}) ``` raises ```python Traceback (most recent call last): File "playground/ttest.py", line 5, in <mod...
https://github.com/huggingface/datasets/issues/2921
[]
null
2,921
false
Fix unwanted tqdm bar when accessing examples
A change in #2814 added unwanted progress bars in `map_nested`. They are now disabled by default. Fix #2919
https://github.com/huggingface/datasets/pull/2920
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2920", "html_url": "https://github.com/huggingface/datasets/pull/2920", "diff_url": "https://github.com/huggingface/datasets/pull/2920.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2920.patch", "merged_at": "2021-09-15T17:18...
2,920
true
Unwanted progress bars when accessing examples
When accessing examples from a dataset formatted for pytorch, some progress bars appear when accessing examples: ```python In [1]: import datasets as ds In [2]: d = ds.Dataset.from_dict({"a": [0, 1, 2]}).with_format("torch") ...
https://github.com/huggingface/datasets/issues/2919
[ "doing a patch release now :)" ]
null
2,919
false
`Can not decode content-encoding: gzip` when loading `scitldr` dataset with streaming
## Describe the bug Trying to load the `"FullText"` config of the `"scitldr"` dataset with `streaming=True` raises an error from `aiohttp`: ```python ClientPayloadError: 400, message='Can not decode content-encoding: gzip' ``` cc @lhoestq ## Steps to reproduce the bug ```python from datasets import load_...
https://github.com/huggingface/datasets/issues/2918
[ "Hi @SBrandeis, thanks for reporting! ^^\r\n\r\nI think this is an issue with `fsspec`: https://github.com/intake/filesystem_spec/issues/389\r\n\r\nI will ask them if they are planning to fix it...", "Code to reproduce the bug: `ClientPayloadError: 400, message='Can not decode content-encoding: gzip'`\r\n```pytho...
null
2,918
false
windows download abnormal
## Describe the bug The script clearly exists (it is accessible from the browser), but the script download fails on Windows. I then tried it again and it downloads normally on Linux. Why? ## Steps to reproduce the bug ```python3.7 + windows ![image](https://user-images.githubusercontent.com/52347799/133436174-43...
https://github.com/huggingface/datasets/issues/2917
[ "Hi ! Is there some kind of proxy that is configured in your browser that gives you access to internet ? If it's the case it could explain why it doesn't work in the code, since the proxy wouldn't be used", "It is indeed an agency problem, thank you very, very much", "Let me know if you have other questions :)\...
null
2,917
false
Add OpenAI's pass@k code evaluation metric
This PR introduces the `code_eval` metric which implements [OpenAI's code evaluation harness](https://github.com/openai/human-eval) introduced in the [Codex paper](https://arxiv.org/abs/2107.03374). It is heavily based on the original implementation and just adapts the interface to follow the `predictions`/`references`...
https://github.com/huggingface/datasets/pull/2916
[ "> The implementation makes heavy use of multiprocessing which this PR does not touch. Is this conflicting with multiprocessing natively integrated in datasets?\r\n\r\nIt should work normally, but feel free to test it.\r\nThere is some documentation about using metrics in a distributed setup that uses multiprocessi...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2916", "html_url": "https://github.com/huggingface/datasets/pull/2916", "diff_url": "https://github.com/huggingface/datasets/pull/2916.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2916.patch", "merged_at": "2021-11-12T14:19...
2,916
true
Fix fsspec AbstractFileSystem access
This addresses the issue from #2914 by changing the way fsspec's AbstractFileSystem is accessed.
https://github.com/huggingface/datasets/pull/2915
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2915", "html_url": "https://github.com/huggingface/datasets/pull/2915", "diff_url": "https://github.com/huggingface/datasets/pull/2915.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2915.patch", "merged_at": "2021-09-15T11:35...
2,915
true
Having a dependency defining fsspec entrypoint raises an AttributeError when importing datasets
## Describe the bug In one of my project, I defined a custom fsspec filesystem with an entrypoint. My guess is that by doing so, a variable named `spec` is created in the module `fsspec` (created by entering a for loop as there are entrypoints defined, see the loop in question [here](https://github.com/intake/filesys...
https://github.com/huggingface/datasets/issues/2914
[ "Closed by #2915." ]
null
2,914
false
timit_asr dataset only includes one text phrase
## Describe the bug The dataset 'timit_asr' only includes one text phrase. It only includes the transcription "Would such an act of refusal be useful?" multiple times rather than different phrases. ## Steps to reproduce the bug Note: I am following the tutorial https://huggingface.co/blog/fine-tune-wav2vec2-englis...
https://github.com/huggingface/datasets/issues/2913
[ "Hi @margotwagner, \r\nThis bug was fixed in #1995. Upgrading the datasets should work (min v1.8.0 ideally)", "Hi @margotwagner,\r\n\r\nYes, as @bhavitvyamalik has commented, this bug was fixed in `datasets` version 1.5.0. You need to update it, as your current version is 1.4.1:\r\n> Environment info\r\n> - `data...
null
2,913
false
Update link to Blog in docs footer
Update link.
https://github.com/huggingface/datasets/pull/2912
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2912", "html_url": "https://github.com/huggingface/datasets/pull/2912", "diff_url": "https://github.com/huggingface/datasets/pull/2912.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2912.patch", "merged_at": "2021-09-15T07:59...
2,912
true
Fix exception chaining
Fix exception chaining to avoid tracebacks with message: `During handling of the above exception, another exception occurred:`
https://github.com/huggingface/datasets/pull/2911
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2911", "html_url": "https://github.com/huggingface/datasets/pull/2911", "diff_url": "https://github.com/huggingface/datasets/pull/2911.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2911.patch", "merged_at": "2021-09-16T15:04...
2,911
true
feat: 🎸 pass additional arguments to get private configs + info
`use_auth_token` can now be passed to the functions to get the configs or infos of private datasets on the hub
https://github.com/huggingface/datasets/pull/2910
[ "Included in https://github.com/huggingface/datasets/pull/2906" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2910", "html_url": "https://github.com/huggingface/datasets/pull/2910", "diff_url": "https://github.com/huggingface/datasets/pull/2910.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2910.patch", "merged_at": null }
2,910
true
fix anli splits
I can't run the tests for dummy data, facing this error `ImportError while loading conftest '/home/zaid/tmp/fix_anli_splits/datasets/tests/conftest.py'. tests/conftest.py:10: in <module> from datasets import config E ImportError: cannot import name 'config' from 'datasets' (unknown location)`
https://github.com/huggingface/datasets/pull/2909
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2909", "html_url": "https://github.com/huggingface/datasets/pull/2909", "diff_url": "https://github.com/huggingface/datasets/pull/2909.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2909.patch", "merged_at": null }
2,909
true
Update Zenodo metadata with creator names and affiliation
This PR helps in prefilling author data when automatically generating the DOI after each release.
https://github.com/huggingface/datasets/pull/2908
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2908", "html_url": "https://github.com/huggingface/datasets/pull/2908", "diff_url": "https://github.com/huggingface/datasets/pull/2908.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2908.patch", "merged_at": "2021-09-14T14:29...
2,908
true
add story_cloze dataset
@lhoestq I have spent some time but I still can't succeed in correctly testing the dummy_data.
https://github.com/huggingface/datasets/pull/2907
[ "Will create a new one, this one seems to be missed up. " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2907", "html_url": "https://github.com/huggingface/datasets/pull/2907", "diff_url": "https://github.com/huggingface/datasets/pull/2907.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2907.patch", "merged_at": null }
2,907
true
feat: 🎸 add a function to get a dataset config's split names
Also: pass additional arguments (use_auth_token) to get private configs + info of private datasets on the hub Questions: - [x] I'm not sure how the versions work: I changed 1.12.1.dev0 to 1.12.1.dev1, was it correct? -> no: reverted - [x] Should I add a section in https://github.com/huggingface/datasets/blo...
https://github.com/huggingface/datasets/pull/2906
[ "> Should I add a section in https://github.com/huggingface/datasets/blob/master/docs/source/load_hub.rst? (there is no section for get_dataset_infos)\r\n\r\nYes totally :) This tutorial should indeed mention this, given how fundamental it is" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2906", "html_url": "https://github.com/huggingface/datasets/pull/2906", "diff_url": "https://github.com/huggingface/datasets/pull/2906.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2906.patch", "merged_at": "2021-10-04T09:55...
2,906
true
Update BibTeX entry
Update BibTeX entry.
https://github.com/huggingface/datasets/pull/2905
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2905", "html_url": "https://github.com/huggingface/datasets/pull/2905", "diff_url": "https://github.com/huggingface/datasets/pull/2905.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2905.patch", "merged_at": "2021-09-14T12:25...
2,905
true
FORCE_REDOWNLOAD does not work
## Describe the bug With GenerateMode.FORCE_REDOWNLOAD, the documentation says +------------------------------------+-----------+---------+ | | Downloads | Dataset | +====================================+===========+=========+ | `REUSE_DATASET_IF_EXISTS` (default...
https://github.com/huggingface/datasets/issues/2904
[ "Hi ! Thanks for reporting. The error seems to happen only if you use compressed files.\r\n\r\nThe second dataset is prepared in another dataset cache directory than the first - which is normal, since the source file is different. However, it doesn't uncompress the new data file because it finds the old uncompresse...
null
2,904
false
Fix xpathopen to accept positional arguments
Fix `xpathopen()` so that it also accepts positional arguments. Fix #2901.
https://github.com/huggingface/datasets/pull/2903
[ "thanks!" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2903", "html_url": "https://github.com/huggingface/datasets/pull/2903", "diff_url": "https://github.com/huggingface/datasets/pull/2903.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2903.patch", "merged_at": "2021-09-14T08:40...
2,903
true
Add WIT Dataset
## Adding a Dataset - **Name:** *WIT* - **Description:** *Wikipedia-based Image Text Dataset* - **Paper:** *[WIT: Wikipedia-based Image Text Dataset for Multimodal Multilingual Machine Learning ](https://arxiv.org/abs/2103.01913)* - **Data:** *https://github.com/google-research-datasets/wit* - **Motivation:** (e...
https://github.com/huggingface/datasets/issues/2902
[ "@hassiahk is working on it #2810 ", "WikiMedia is now hosting the pixel values directly which should make it a lot easier!\r\nThe files can be found here:\r\nhttps://techblog.wikimedia.org/2021/09/09/the-wikipedia-image-caption-matching-challenge-and-a-huge-release-of-image-data-for-research/\r\nhttps://analyti...
null
2,902
false
Incompatibility with pytest
## Describe the bug pytest complains about xpathopen / path.open("w") ## Steps to reproduce the bug Create a test file, `test.py`: ```python import datasets as ds def load_dataset(): ds.load_dataset("counter", split="train", streaming=True) ``` And launch it with pytest: ```bash python -m pyt...
https://github.com/huggingface/datasets/issues/2901
[ "Sorry, my bad... When implementing `xpathopen`, I just considered the use case in the COUNTER dataset... I'm fixing it!" ]
null
2,901
false
Fix null sequence encoding
The Sequence feature encoding was failing when a `None` sequence was used in a dataset. Fix https://github.com/huggingface/datasets/issues/2892
https://github.com/huggingface/datasets/pull/2900
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2900", "html_url": "https://github.com/huggingface/datasets/pull/2900", "diff_url": "https://github.com/huggingface/datasets/pull/2900.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2900.patch", "merged_at": "2021-09-13T14:17...
2,900
true
Dataset
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons t...
https://github.com/huggingface/datasets/issues/2899
[]
null
2,899
false
Hug emoji
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons t...
https://github.com/huggingface/datasets/issues/2898
[]
null
2,898
false
Add OpenAI's HumanEval dataset
This PR adds OpenAI's [HumanEval](https://github.com/openai/human-eval) dataset. The dataset consists of 164 handcrafted programming problems with solutions and unit tests to verify the solutions. This dataset is useful for evaluating code generation models.
https://github.com/huggingface/datasets/pull/2897
[ "I just fixed the class name, and added `[More Information Needed]` in empty sections in case people want to complete the dataset card :)" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2897", "html_url": "https://github.com/huggingface/datasets/pull/2897", "diff_url": "https://github.com/huggingface/datasets/pull/2897.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2897.patch", "merged_at": "2021-09-16T15:02...
2,897
true
add multi-proc in `to_csv`
This PR extends the multi-proc method used in #2747 for `to_json` to `to_csv` as well. Results on my machine post benchmarking on `ascent_kb` dataset (giving ~45% improvement when compared to num_proc = 1): ``` Time taken on 1 num_proc, 10000 batch_size 674.2055702209473 Time taken on 4 num_proc, 10000 batch_siz...
https://github.com/huggingface/datasets/pull/2896
[ "I think you can just add a test `test_dataset_to_csv_multiproc` in `tests/io/test_csv.py` and we'll be good", "Hi @lhoestq, \r\nI've added `test_dataset_to_csv` apart from `test_dataset_to_csv_multiproc` as no test was there to check generated CSV file when `num_proc=1`. Please let me know if anything is also re...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2896", "html_url": "https://github.com/huggingface/datasets/pull/2896", "diff_url": "https://github.com/huggingface/datasets/pull/2896.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2896.patch", "merged_at": "2021-10-26T16:00...
2,896
true
Use pyarrow.Table.replace_schema_metadata instead of pyarrow.Table.cast
This PR partially addresses #2252. ``update_metadata_with_features`` uses ``Table.cast`` which slows down ``load_from_disk`` (and possibly other methods that use it) for very large datasets. Since ``update_metadata_with_features`` is only updating the schema metadata, it makes more sense to use ``pyarrow.Table.repla...
https://github.com/huggingface/datasets/pull/2895
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2895", "html_url": "https://github.com/huggingface/datasets/pull/2895", "diff_url": "https://github.com/huggingface/datasets/pull/2895.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2895.patch", "merged_at": "2021-09-21T08:18...
2,895
true
Fix COUNTER dataset
Fix filename generating `FileNotFoundError`. Related to #2866. CC: @severo.
https://github.com/huggingface/datasets/pull/2894
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2894", "html_url": "https://github.com/huggingface/datasets/pull/2894", "diff_url": "https://github.com/huggingface/datasets/pull/2894.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2894.patch", "merged_at": "2021-09-10T16:27...
2,894
true
add mbpp dataset
This PR adds the mbpp dataset introduced by Google [here](https://github.com/google-research/google-research/tree/master/mbpp) as mentioned in #2816. The dataset contains two versions: a full and a sanitized one. They have slightly different schemas, and in its current state the loading preserves the original schema. ...
https://github.com/huggingface/datasets/pull/2893
[ "I think it's fine to have the original schema" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2893", "html_url": "https://github.com/huggingface/datasets/pull/2893", "diff_url": "https://github.com/huggingface/datasets/pull/2893.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2893.patch", "merged_at": "2021-09-16T09:35...
2,893
true
Error when encoding a dataset with None objects with a Sequence feature
There is an error when encoding a dataset with None objects with a Sequence feature To reproduce: ```python from datasets import Dataset, Features, Value, Sequence data = {"a": [[0], None]} features = Features({"a": Sequence(Value("int32"))}) dataset = Dataset.from_dict(data, features=features) ``` raises ...
https://github.com/huggingface/datasets/issues/2892
[ "This has been fixed by https://github.com/huggingface/datasets/pull/2900\r\nWe're doing a new release 1.12 today to make the fix available :)" ]
null
2,892
false
Allow dynamic first dimension for ArrayXD
Add support for dynamic first dimension for ArrayXD features. See issue [#887](https://github.com/huggingface/datasets/issues/887). Following changes allow for `to_pylist` method of `ArrayExtensionArray` to return a list of numpy arrays where fist dimension can vary. @lhoestq Could you suggest how you want to exten...
https://github.com/huggingface/datasets/pull/2891
[ "@lhoestq, thanks for your review.\r\n\r\nI added test for `to_pylist`, I didn't do that for `to_numpy` because this method shouldn't be called for dynamic dimension ArrayXD - this method will try to make a single numpy array for the whole column which cannot be done for dynamic arrays.\r\n\r\nI dig into `to_pandas...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2891", "html_url": "https://github.com/huggingface/datasets/pull/2891", "diff_url": "https://github.com/huggingface/datasets/pull/2891.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2891.patch", "merged_at": "2021-10-29T09:37...
2,891
true
0x290B112ED1280537B24Ee6C268a004994a16e6CE
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons t...
https://github.com/huggingface/datasets/issues/2890
[]
null
2,890
false
Coc
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons t...
https://github.com/huggingface/datasets/issues/2889
[]
null
2,889
false
v1.11.1 release date
Hello, I need to use the latest features in one of my packages, but there has been no new datasets release for 2 months. When do you plan to publish the v1.11.1 release?
https://github.com/huggingface/datasets/issues/2888
[ "Hi ! Probably 1.12 on monday :)\r\n", "@albertvillanova i think this issue is still valid and should not be closed till `>1.11.0` is published :)" ]
null
2,888
false
#2837 Use cache folder for lockfile
Fixes #2837 Use a cache folder directory to store the FileLock. The issue was that the lock file was in a readonly folder.
https://github.com/huggingface/datasets/pull/2887
[ "The CI fail about the meteor metric is unrelated to this PR " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2887", "html_url": "https://github.com/huggingface/datasets/pull/2887", "diff_url": "https://github.com/huggingface/datasets/pull/2887.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2887.patch", "merged_at": "2021-10-05T17:58...
2,887
true
Hj
null
https://github.com/huggingface/datasets/issues/2886
[]
null
2,886
false
Adding an Elastic Search index to a Dataset
## Describe the bug When trying to index documents from the squad dataset, the connection to ElasticSearch seems to break: Reusing dataset squad (/Users/andreasmotz/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453) 90%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ...
https://github.com/huggingface/datasets/issues/2885
[ "Hi, is this bug deterministic in your poetry env ? I mean, does it always stop at 90% or is it random ?\r\n\r\nAlso, can you try using another version of Elasticsearch ? Maybe there's an issue with the one of you poetry env", "I face similar issue with oscar dataset on remote ealsticsearch instance. It was mainl...
null
2,885
false
Add IC, SI, ER tasks to SUPERB
This PR adds 3 additional classification tasks to SUPERB #### Intent Classification Dataset URL seems to be down at the moment :( See the note below. S3PRL source: https://github.com/s3prl/s3prl/blob/master/s3prl/downstream/fluent_commands/dataset.py Instructions: https://github.com/s3prl/s3prl/tree/master/s3prl/...
https://github.com/huggingface/datasets/pull/2884
[ "Sorry for the late PR, uploading 10+GB files to the hub through a VPN was an adventure :sweat_smile: ", "Thank you so much for adding these subsets @anton-l! \r\n\r\n> These datasets either require manual downloads or have broken/unstable links. You can get all necessary archives in this repo: https://huggingfac...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2884", "html_url": "https://github.com/huggingface/datasets/pull/2884", "diff_url": "https://github.com/huggingface/datasets/pull/2884.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2884.patch", "merged_at": "2021-09-20T09:00...
2,884
true
Fix data URLs and metadata in DocRED dataset
The host of `docred` dataset has updated the `dev` data file. This PR: - Updates the dev URL - Updates dataset metadata This PR also fixes the URL of the `train_distant` split, which was wrong. Fix #2882.
https://github.com/huggingface/datasets/pull/2883
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2883", "html_url": "https://github.com/huggingface/datasets/pull/2883", "diff_url": "https://github.com/huggingface/datasets/pull/2883.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2883.patch", "merged_at": "2021-09-13T11:24...
2,883
true
`load_dataset('docred')` results in a `NonMatchingChecksumError`
## Describe the bug I get consistent `NonMatchingChecksumError: Checksums didn't match for dataset source files` errors when trying to execute `datasets.load_dataset('docred')`. ## Steps to reproduce the bug It is quasi only this code: ```python import datasets data = datasets.load_dataset('docred') ``` ## ...
https://github.com/huggingface/datasets/issues/2882
[ "Hi @tmpr, thanks for reporting.\r\n\r\nTwo weeks ago (23th Aug), the host of the source `docred` dataset updated one of the files (`dev.json`): you can see it [here](https://drive.google.com/drive/folders/1c5-0YwnoJx8NS6CV2f-NoTHR__BdkNqw).\r\n\r\nTherefore, the checksum needs to be updated.\r\n\r\nNormally, in th...
null
2,882
false
Add BIOSSES dataset
Adding the biomedical semantic sentence similarity dataset, BIOSSES, listed in "Biomedical Datasets - BigScience Workshop 2021"
https://github.com/huggingface/datasets/pull/2881
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2881", "html_url": "https://github.com/huggingface/datasets/pull/2881", "diff_url": "https://github.com/huggingface/datasets/pull/2881.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2881.patch", "merged_at": "2021-09-13T14:20...
2,881
true
Extend support for streaming datasets that use pathlib.Path stem/suffix
This PR extends the support in streaming mode for datasets that use `pathlib`, by patching the properties `pathlib.Path.stem` and `pathlib.Path.suffix`. Related to #2876, #2874, #2866. CC: @severo
https://github.com/huggingface/datasets/pull/2880
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2880", "html_url": "https://github.com/huggingface/datasets/pull/2880", "diff_url": "https://github.com/huggingface/datasets/pull/2880.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2880.patch", "merged_at": "2021-09-09T13:13...
2,880
true
In v1.4.1, all TIMIT train transcripts are "Would such an act of refusal be useful?"
## Describe the bug Using version 1.4.1 of `datasets`, TIMIT transcripts are all the same. ## Steps to reproduce the bug I was following this tutorial - https://huggingface.co/blog/fine-tune-wav2vec2-english But here's a distilled repro: ```python !pip install datasets==1.4.1 from datasets import load_datas...
https://github.com/huggingface/datasets/issues/2879
[ "Hi @rcgale, thanks for reporting.\r\n\r\nPlease note that this bug was fixed on `datasets` version 1.5.0: https://github.com/huggingface/datasets/commit/a23c73e526e1c30263834164f16f1fdf76722c8c#diff-f12a7a42d4673bb6c2ca5a40c92c29eb4fe3475908c84fd4ce4fad5dc2514878\r\n\r\nIf you update `datasets` version, that shoul...
null
2,879
false
NotADirectoryError: [WinError 267] During load_from_disk
## Describe the bug Trying to load saved dataset or dataset directory from Amazon S3 on a Windows machine fails. Performing the same operation succeeds on non-windows environment (AWS Sagemaker). ## Steps to reproduce the bug ```python # Followed https://huggingface.co/docs/datasets/filesystems.html#loading-a-pr...
https://github.com/huggingface/datasets/issues/2878
[]
null
2,878
false
Don't keep the dummy data folder or dataset_infos.json when resolving data files
When there's no dataset script, all the data files of a folder or a repository on the Hub are loaded as data files. There are already a few exceptions: - files starting with "." are ignored - the dataset card "README.md" is ignored - any file named "config.json" is ignored (currently it isn't used anywhere, but i...
https://github.com/huggingface/datasets/issues/2877
[ "Hi @lhoestq I am new to huggingface datasets, I would like to work on this issue!\r\n", "Thanks for the help :) \r\n\r\nAs mentioned in the PR, excluding files named \"dummy_data.zip\" is actually more general than excluding the files inside a \"dummy\" folder. I just did the change in the PR, I think we can mer...
null
2,877
false
Extend support for streaming datasets that use pathlib.Path.glob
This PR extends the support in streaming mode for datasets that use `pathlib`, by patching the method `pathlib.Path.glob`. Related to #2874, #2866. CC: @severo
https://github.com/huggingface/datasets/pull/2876
[ "I am thinking that ideally we should call `fs.glob()` instead...", "Thanks, @lhoestq: the idea of adding the mock filesystem is to avoid network calls and reduce testing time ;) \r\n\r\nI have added `rglob` as well and fixed some bugs." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2876", "html_url": "https://github.com/huggingface/datasets/pull/2876", "diff_url": "https://github.com/huggingface/datasets/pull/2876.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2876.patch", "merged_at": "2021-09-10T09:50...
2,876
true
Add Congolese Swahili speech datasets
## Adding a Dataset - **Name:** Congolese Swahili speech corpora - **Data:** https://gamayun.translatorswb.org/data/ Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). Also related: https://mobile.twitter.com/OktemAlp/status/14351963936...
https://github.com/huggingface/datasets/issues/2875
[]
null
2,875
false
Support streaming datasets that use pathlib
This PR extends the support in streaming mode for datasets that use `pathlib.Path`. Related to: #2866. CC: @severo
https://github.com/huggingface/datasets/pull/2874
[ "I've tried https://github.com/huggingface/datasets/issues/2866 again, and I get the same error.\r\n\r\n```python\r\nimport datasets as ds\r\nds.load_dataset('counter', split=\"train\", streaming=False)\r\n```", "@severo Issue #2866 is not fully fixed yet: multiple patches need to be implemented for `pathlib`, as...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2874", "html_url": "https://github.com/huggingface/datasets/pull/2874", "diff_url": "https://github.com/huggingface/datasets/pull/2874.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2874.patch", "merged_at": "2021-09-07T11:41...
2,874
true
adding swedish_medical_ner
Adding the Swedish Medical NER dataset, listed in "Biomedical Datasets - BigScience Workshop 2021" Code refactored
https://github.com/huggingface/datasets/pull/2873
[ "Hi, what's the current status of this request? It says Changes requested, but I can't see what changes?", "Hi, it looks like this PR includes changes to other files that `swedish_medical_ner`.\r\n\r\nFeel free to remove these changes, or simply create a new PR that only contains the addition of the dataset" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2873", "html_url": "https://github.com/huggingface/datasets/pull/2873", "diff_url": "https://github.com/huggingface/datasets/pull/2873.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2873.patch", "merged_at": null }
2,873
true
adding swedish_medical_ner
Adding the Swedish Medical NER dataset, listed in "Biomedical Datasets - BigScience Workshop 2021"
https://github.com/huggingface/datasets/pull/2872
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2872", "html_url": "https://github.com/huggingface/datasets/pull/2872", "diff_url": "https://github.com/huggingface/datasets/pull/2872.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2872.patch", "merged_at": null }
2,872
true
datasets.config.PYARROW_VERSION has no attribute 'major'
In the test_dataset_common.py script, lines 288-289 ``` if datasets.config.PYARROW_VERSION.major < 3: packaged_datasets = [pd for pd in packaged_datasets if pd["dataset_name"] != "parquet"] ``` throw the error below. `datasets.config.PYARROW_VERSION` itself returns the string '4.0.1'. I have tested thi...
https://github.com/huggingface/datasets/issues/2871
[ "I have changed line 288 to `if int(datasets.config.PYARROW_VERSION.split(\".\")[0]) < 3:` just to get around it.", "Hi @bwang482,\r\n\r\nI'm sorry but I'm not able to reproduce your bug.\r\n\r\nPlease note that in our current master branch, we made a commit (d03223d4d64b89e76b48b00602aba5aa2f817f1e) that simulta...
null
2,871
false
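The workaround mentioned in the comments of the record above — parsing the major component out of the version string — can be sketched in plain Python (no pyarrow needed; `major_version` is a hypothetical helper written here for illustration, not part of `datasets`, and the version strings are examples):

```python
def major_version(version_string: str) -> int:
    """Extract the major component from a dotted version string like '4.0.1'.

    Works whether the version is exposed as a plain string or coerced from
    another object via str() -- which is what the `.major` attribute access
    in the issue above assumed but did not get.
    """
    return int(version_string.split(".")[0])


# The guard from the test script, rewritten against the string form:
if major_version("4.0.1") < 3:
    print("parquet tests would be skipped")
else:
    print("parquet tests would run")
```

This mirrors the one-line fix suggested in the first comment (`int(datasets.config.PYARROW_VERSION.split(".")[0]) < 3`), without depending on whether `PYARROW_VERSION` is a string or a parsed version object.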
Fix three typos in two files for documentation
Changed "bacth_size" to "batch_size" (2x) Changed "intsructions" to "instructions"
https://github.com/huggingface/datasets/pull/2870
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2870", "html_url": "https://github.com/huggingface/datasets/pull/2870", "diff_url": "https://github.com/huggingface/datasets/pull/2870.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2870.patch", "merged_at": "2021-09-06T08:19...
2,870
true
TypeError: 'NoneType' object is not callable
## Describe the bug TypeError: 'NoneType' object is not callable ## Steps to reproduce the bug ```python from datasets import load_dataset, load_metric dataset = datasets.load_dataset("glue", 'cola') ``` ## Expected results A clear and concise description of the expected results. ## Actual results Speci...
https://github.com/huggingface/datasets/issues/2869
[ "Hi, @Chenfei-Kang.\r\n\r\nI'm sorry, but I'm not able to reproduce your bug:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nds = load_dataset(\"glue\", 'cola')\r\nds\r\n```\r\n```\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['sentence', 'label', 'idx'],\r\n num_rows: 8551\r\n ...
null
2,869
false
Add Common Objects in 3D (CO3D)
## Adding a Dataset - **Name:** *Common Objects in 3D (CO3D)* - **Description:** *See blog post [here](https://ai.facebook.com/blog/common-objects-in-3d-dataset-for-3d-reconstruction)* - **Paper:** *[link to paper](https://arxiv.org/abs/2109.00512)* - **Data:** *[link to data](https://ai.facebook.com/datasets/co3d-...
https://github.com/huggingface/datasets/issues/2868
[]
null
2,868
false
Add CaSiNo dataset
Hi. I request you to add our dataset to the repository. This data was recently published at NAACL 2021: https://aclanthology.org/2021.naacl-main.254.pdf
https://github.com/huggingface/datasets/pull/2867
[ "Hi @lhoestq \r\n\r\nJust a request to look at the dataset. Please let me know if any changes are necessary before merging it into the repo. Thank you.", "Hey @lhoestq \r\n\r\nThanks for merging it. One question: I still cannot find the dataset on https://huggingface.co/datasets. Does it take some time or did I ...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2867", "html_url": "https://github.com/huggingface/datasets/pull/2867", "diff_url": "https://github.com/huggingface/datasets/pull/2867.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2867.patch", "merged_at": "2021-09-16T09:23...
2,867
true
"counter" dataset raises an error in normal mode, but not in streaming mode
## Describe the bug `counter` dataset raises an error on `load_dataset()`, but simply returns an empty iterator in streaming mode. ## Steps to reproduce the bug ```python >>> import datasets as ds >>> a = ds.load_dataset('counter', split="train", streaming=False) Using custom data configuration default Dow...
https://github.com/huggingface/datasets/issues/2866
[ "Hi @severo, thanks for reporting.\r\n\r\nJust note that currently not all canonical datasets support streaming mode: this is one case!\r\n\r\nAll datasets that use `pathlib` joins (using `/`) instead of `os.path.join` (as in this dataset) do not support streaming mode yet.", "OK. Do you think it's possible to de...
null
2,866
false
Add MultiEURLEX dataset
**Add new MultiEURLEX Dataset** MultiEURLEX comprises 65k EU laws in 23 official EU languages (some low-ish resource). Each EU law has been annotated with EUROVOC concepts (labels) by the Publication Office of EU. As with the English EURLEX, the goal is to predict the relevant EUROVOC concepts (labels); this is mult...
https://github.com/huggingface/datasets/pull/2865
[ "Hi @lhoestq, we have this new cool multilingual dataset coming at EMNLP 2021. It would be really nice if we could have it in Hugging Face asap. Thanks! ", "Hi @lhoestq, I adopted most of your suggestions:\r\n\r\n- Dummy data files reduced, including the 2 smallest documents per subset JSONL.\r\n- README was upda...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2865", "html_url": "https://github.com/huggingface/datasets/pull/2865", "diff_url": "https://github.com/huggingface/datasets/pull/2865.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2865.patch", "merged_at": "2021-09-10T11:50...
2,865
true
Fix data URL in ToTTo dataset
Data source host changed their data URL: google-research-datasets/ToTTo@cebeb43. Fix #2860.
https://github.com/huggingface/datasets/pull/2864
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2864", "html_url": "https://github.com/huggingface/datasets/pull/2864", "diff_url": "https://github.com/huggingface/datasets/pull/2864.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2864.patch", "merged_at": "2021-09-02T06:47...
2,864
true
Update dataset URL
null
https://github.com/huggingface/datasets/pull/2863
[ "Superseded by PR #2864.\r\n\r\n@mrm8488 next time you would like to work on an issue, you can first self-assign it to you (by writing `#self-assign` in a comment on the issue). That way, other people can see you are already working on it and there are not multiple people working on the same issue. πŸ˜‰ " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2863", "html_url": "https://github.com/huggingface/datasets/pull/2863", "diff_url": "https://github.com/huggingface/datasets/pull/2863.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2863.patch", "merged_at": null }
2,863
true
fix: πŸ› be more specific when catching exceptions
The same specific exception is caught in other parts of the same function.
https://github.com/huggingface/datasets/pull/2861
[ "To give more context: after our discussion, if I understood properly, you are trying to fix a call to `datasets` that takes 15 minutes: https://github.com/huggingface/datasets-preview-backend/issues/17 Is this right?\r\n\r\n", "Yes, that's it. And to do that I'm trying to use https://pypi.org/project/stopit/, wh...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2861", "html_url": "https://github.com/huggingface/datasets/pull/2861", "diff_url": "https://github.com/huggingface/datasets/pull/2861.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2861.patch", "merged_at": null }
2,861
true
Cannot download TOTTO dataset
Error: Couldn't find file at https://storage.googleapis.com/totto/totto_data.zip `datasets version: 1.11.0` # How to reproduce: ```py from datasets import load_dataset dataset = load_dataset('totto') ```
https://github.com/huggingface/datasets/issues/2860
[ "Hola @mrm8488, thanks for reporting.\r\n\r\nApparently, the data source host changed their URL one week ago: https://github.com/google-research-datasets/ToTTo/commit/cebeb430ec2a97747e704d16a9354f7d9073ff8f\r\n\r\nI'm fixing it." ]
null
2,860
false
Loading allenai/c4 in streaming mode does too many HEAD requests
This does 60,000+ HEAD requests to get all the ETags of all the data files: ```python from datasets import load_dataset load_dataset("allenai/c4", streaming=True) ``` It makes loading the dataset completely impractical. The ETags are used to compute the config id (it must depend on the data files being used). ...
https://github.com/huggingface/datasets/issues/2859
[ "https://github.com/huggingface/datasets/blob/6c766f9115d686182d76b1b937cb27e099c45d68/src/datasets/builder.py#L179-L186", "Thanks a lot!!!" ]
null
2,859
false
Fix s3fs version in CI
The latest s3fs version has new constraints on aiobotocore, and therefore on boto3 and botocore. This PR changes the constraints to avoid the new conflicts. In particular it pins the version of s3fs.
https://github.com/huggingface/datasets/pull/2858
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2858", "html_url": "https://github.com/huggingface/datasets/pull/2858", "diff_url": "https://github.com/huggingface/datasets/pull/2858.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2858.patch", "merged_at": "2021-08-31T21:29...
2,858
true
Update: Openwebtext - update size
Update the size of the Openwebtext dataset. I also regenerated the dataset_infos.json, but the data file checksum didn't change, and neither did the number of examples (8013769 examples). Close #2839, close #726.
https://github.com/huggingface/datasets/pull/2857
[ "merging since the CI error in unrelated to this PR and fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2857", "html_url": "https://github.com/huggingface/datasets/pull/2857", "diff_url": "https://github.com/huggingface/datasets/pull/2857.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2857.patch", "merged_at": "2021-09-07T09:44...
2,857
true
fix: πŸ› remove URL's query string only if it's ?dl=1
A lot of URLs use query strings, for example http://opus.nlpl.eu/download.php?f=Bianet/v1/moses/en-ku.txt.zip, so we must not remove them when trying to detect the protocol. We thus remove the query string only when it is ?dl=1, which occurs on dropbox and dl.orangedox.com. Also: add unit tests. See ht...
https://github.com/huggingface/datasets/pull/2856
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2856", "html_url": "https://github.com/huggingface/datasets/pull/2856", "diff_url": "https://github.com/huggingface/datasets/pull/2856.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2856.patch", "merged_at": "2021-08-31T14:22...
2,856
true
Fix windows CI CondaError
From this thread: https://github.com/conda/conda/issues/6057 We can fix the conda error ``` CondaError: Cannot link a source that does not exist. C:\Users\...\Anaconda3\Scripts\conda.exe ``` by doing ```bash conda update conda ``` before doing any install in the windows CI
https://github.com/huggingface/datasets/pull/2855
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2855", "html_url": "https://github.com/huggingface/datasets/pull/2855", "diff_url": "https://github.com/huggingface/datasets/pull/2855.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2855.patch", "merged_at": "2021-08-31T13:35...
2,855
true
Fix caching when moving script
When caching the result of a `map` function, the hash that is computed depends on many properties of this function, such as all the python objects it uses, its code and also the location of this code. Using the full path of the python script for the location of the code makes the hash change if a script like `run_ml...
https://github.com/huggingface/datasets/pull/2854
[ "Merging since the CI failure is unrelated to this PR" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2854", "html_url": "https://github.com/huggingface/datasets/pull/2854", "diff_url": "https://github.com/huggingface/datasets/pull/2854.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2854.patch", "merged_at": "2021-08-31T13:13...
2,854
true
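The idea described in the PR above — making the cache fingerprint depend on the script's location relative to the project rather than its absolute path — can be sketched with the standard library. This is an illustrative sketch only, not the actual `datasets` hashing code; `fingerprint_location` and the paths are made up for the example:

```python
import hashlib
import os


def fingerprint_location(script_path: str, base_dir: str) -> str:
    """Hash a script's location relative to a base directory, so the same
    project checked out under different absolute paths yields the same hash.

    (Hypothetical helper for illustration -- the real fingerprinting in
    `datasets` hashes many more properties of the mapped function.)
    """
    relative = os.path.relpath(script_path, base_dir)
    return hashlib.sha256(relative.encode("utf-8")).hexdigest()


# Two users with the project under different home directories
# still agree on the fingerprint, so map() results can be shared:
a = fingerprint_location("/home/alice/project/train_script.py", "/home/alice/project")
b = fingerprint_location("/home/bob/work/project/train_script.py", "/home/bob/work/project")
assert a == b
```

Hashing the absolute path instead would give `a != b`, which is exactly the cache-miss behavior the PR set out to remove.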
Add AMI dataset
This is an initial commit for AMI dataset
https://github.com/huggingface/datasets/pull/2853
[ "Hey @cahya-wirawan, \r\n\r\nI played around with the dataset a bit and it looks already very good to me! That's exactly how it should be constructed :-) I can help you a bit with defining the config, etc... on Monday!", "@lhoestq - I think the dataset is ready to be merged :-) \r\n\r\nAt the moment, I don't real...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2853", "html_url": "https://github.com/huggingface/datasets/pull/2853", "diff_url": "https://github.com/huggingface/datasets/pull/2853.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2853.patch", "merged_at": "2021-09-29T09:19...
2,853
true
Fix: linnaeus - fix url
The url was causing a `ConnectionError` because of the "/" at the end Close https://github.com/huggingface/datasets/issues/2821
https://github.com/huggingface/datasets/pull/2852
[ "Merging since the CI error is unrelated this this PR" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2852", "html_url": "https://github.com/huggingface/datasets/pull/2852", "diff_url": "https://github.com/huggingface/datasets/pull/2852.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2852.patch", "merged_at": "2021-08-31T13:12...
2,852
true
Update `column_names` showed as `:func:` in exploring.st
Hi, One mention of `column_names` in exploring.st was showing it as `:func:` instead of `:attr:`.
https://github.com/huggingface/datasets/pull/2851
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2851", "html_url": "https://github.com/huggingface/datasets/pull/2851", "diff_url": "https://github.com/huggingface/datasets/pull/2851.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2851.patch", "merged_at": "2021-08-31T14:45...
2,851
true
Wound segmentation datasets
## Adding a Dataset - **Name:** Wound segmentation datasets - **Description:** annotated wound image dataset - **Paper:** https://www.nature.com/articles/s41598-020-78799-w - **Data:** https://github.com/uwm-bigdata/wound-segmentation - **Motivation:** Interesting simple image dataset, useful for segmentation, wi...
https://github.com/huggingface/datasets/issues/2850
[]
null
2,850
false
Add Open Catalyst Project Dataset
## Adding a Dataset - **Name:** Open Catalyst 2020 (OC20) Dataset - **Website:** https://opencatalystproject.org/ - **Data:** https://github.com/Open-Catalyst-Project/ocp/blob/master/DATASET.md Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATAS...
https://github.com/huggingface/datasets/issues/2849
[]
null
2,849
false
Update README.md
Changed 'Tain' to 'Train'.
https://github.com/huggingface/datasets/pull/2848
[ "Merging since the CI error is unrelated to this PR and fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2848", "html_url": "https://github.com/huggingface/datasets/pull/2848", "diff_url": "https://github.com/huggingface/datasets/pull/2848.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2848.patch", "merged_at": "2021-09-07T09:40...
2,848
true
fix regex to accept negative timezone
fix #2846
https://github.com/huggingface/datasets/pull/2847
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2847", "html_url": "https://github.com/huggingface/datasets/pull/2847", "diff_url": "https://github.com/huggingface/datasets/pull/2847.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2847.patch", "merged_at": "2021-09-07T09:34...
2,847
true
Negative timezone
## Describe the bug The load_dataset method does not accept a parquet file with a negative timezone, as it has the following regex: ``` "^(s|ms|us|ns),\s*tz=([a-zA-Z0-9/_+:]*)$" ``` So a valid timestamp ```timestamp[us, tz=-03:00]``` returns an error when loading parquet files. ## Steps to reproduce the bug ```py...
https://github.com/huggingface/datasets/issues/2846
[ "Fixed by #2847." ]
null
2,846
false
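The failing pattern is quoted in the issue above; a minimal sketch shows the rejection and one plausible fix — adding '-' to the timezone character class. The "fixed" pattern here is an approximation for illustration; the actual patch lives in PR #2847:

```python
import re

# Pattern as quoted in the issue: no '-' allowed in the tz group.
OLD = re.compile(r"^(s|ms|us|ns),\s*tz=([a-zA-Z0-9/_+:]*)$")
# One plausible fix: also allow '-' so negative UTC offsets match.
NEW = re.compile(r"^(s|ms|us|ns),\s*tz=([a-zA-Z0-9/_+\-:]*)$")

assert OLD.match("us, tz=+03:00")          # positive offsets already matched
assert OLD.match("us, tz=-03:00") is None  # negative offsets were rejected
assert NEW.match("us, tz=-03:00")          # ...and are accepted after the fix
```

Named-zone timestamps like `us, tz=America/Sao_Paulo` match under both patterns, since letters, '/', and '_' were already in the character class; only the numeric negative offsets were affected.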
[feature request] adding easy to remember `datasets.cache_dataset()` + `datasets.is_dataset_cached()`
Often, there is a need to prepare a dataset but not use it immediately, e.g. think test suite setup, so it'd be really useful to be able to do: ``` if not datasets.is_dataset_cached(ds): datasets.cache_dataset(ds) ``` This can already be done with: ``` builder = load_dataset_builder(ds) if not os.path.idsi...
https://github.com/huggingface/datasets/issues/2845
[]
null
2,845
false
Fix: wikicorpus - fix keys
As mentioned in https://github.com/huggingface/datasets/issues/2552, there is a duplicate keys error in `wikicorpus`. I fixed that by taking into account the file index in the keys
https://github.com/huggingface/datasets/pull/2844
[ "The CI error is unrelated to this PR\r\n\r\n... merging !" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2844", "html_url": "https://github.com/huggingface/datasets/pull/2844", "diff_url": "https://github.com/huggingface/datasets/pull/2844.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2844.patch", "merged_at": "2021-09-06T14:07...
2,844
true
Fix extraction protocol inference from urls with params
Previously it was unable to infer the compression protocol for files at URLs like ``` https://foo.bar/train.json.gz?dl=1 ``` because of the query parameters. I fixed that, this should allow 10+ datasets to work in streaming mode: ``` "discovery", "emotion", "grail_qa", "guardian_authorship", "pra...
https://github.com/huggingface/datasets/pull/2843
[ "merging since the windows error is just a CircleCI issue", "It works, eg https://observablehq.com/@huggingface/datasets-preview-backend-client#{%22datasetId%22%3A%22discovery%22} and https://datasets-preview.huggingface.tech/rows?dataset=discovery&config=discovery&split=train", "Nice !" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/2843", "html_url": "https://github.com/huggingface/datasets/pull/2843", "diff_url": "https://github.com/huggingface/datasets/pull/2843.diff", "patch_url": "https://github.com/huggingface/datasets/pull/2843.patch", "merged_at": "2021-08-30T13:12...
2,843
true
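The extension-based inference described in the PR above can be sketched with the standard library: parse the URL first, then match the compression suffix against the path only, so query parameters like `?dl=1` no longer get in the way. This is an illustrative approximation, not the actual `datasets` implementation; `infer_compression` is a hypothetical helper:

```python
from urllib.parse import urlparse

COMPRESSION_BY_SUFFIX = {".gz": "gzip", ".zip": "zip", ".bz2": "bz2", ".xz": "xz"}


def infer_compression(url: str):
    """Infer a compression protocol from a URL, ignoring query parameters.

    urlparse() splits off the query string, so 'train.json.gz?dl=1'
    is inspected as 'train.json.gz'.
    """
    path = urlparse(url).path  # drops '?dl=1' and friends
    for suffix, protocol in COMPRESSION_BY_SUFFIX.items():
        if path.endswith(suffix):
            return protocol
    return None


assert infer_compression("https://foo.bar/train.json.gz?dl=1") == "gzip"
assert infer_compression("https://foo.bar/train.json") is None
```

Note this only handles URLs whose path carries the extension; URLs that hide the filename inside a query parameter (the `download.php?f=...zip` case from PR #2856) need separate treatment.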
always requiring the username in the dataset name when there is one
Another person and I have now been bitten by `datasets`'s non-strictness in requiring a dataset creator's username when it's due. So both of us started with `stas/openwebtext-10k`, somewhere along the line lost `stas/`, and continued using `openwebtext-10k`, and all was good until we published the software an...
https://github.com/huggingface/datasets/issues/2842
[ "From what I can understand, you want the saved arrow file directory to have username as well instead of just dataset name if it was downloaded with the user prefix?", "I don't think the user cares of how this is done, but the 2nd command should fail, IMHO, as its dataset name is invalid:\r\n```\r\n# first run\r\...
null
2,842
false