| html_url (string) | title (string) | comments (string) | body (string, nullable) | comment_length (int64) | text (string) | embeddings (list) |
|---|---|---|---|---|---|---|
https://github.com/huggingface/datasets/issues/4237 | Common Voice 8 doesn't show datasets viewer | Hmmm, does this mean that any person who downloads the Common Voice dataset will be logged as "system@huggingface.co"? If so, it would defeat the purpose of sending the user's email to the Common Voice API, right? | https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0 | 35 | Common Voice 8 doesn't show datasets viewer
https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0
Hmmm, does this mean that any person who downloads the Common Voice dataset will be logged as "system@huggingface.co"? If so, it would defeat the purpose of sending the user's email to the Common Voice API... | [
-0.1696701944, 0.1444740295, 0.0457174219, 0.2471178919, -0.0200229455, 0.3168714345, 0.5003870726, 0.0581495129, 0.3065966666, -0.0658347681, -0.356238693, -0.1942665726, -0.0795760378, -0.109813571, 0.2577853203, 0.2352219075, -0.012772751, 0.1207207665, 0.3894254565, -0.3368... |
https://github.com/huggingface/datasets/issues/4237 | Common Voice 8 doesn't show datasets viewer | I agree with @severo: we cannot set our system email as default, allowing anybody not authenticated to bypass the Common Voice usage policy.
Additionally, looking at the code, I think we should implement a more robust way to send the user's email to Common Voice: currently anybody can tweak the script and send somebody e... | https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0 | 61 | Common Voice 8 doesn't show datasets viewer
https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0
I agree with @severo: we cannot set our system email as default, allowing anybody not authenticated to bypass the Common Voice usage policy.
Additionally, looking at the code, I think we should implem... | [
-0.4233382344, 0.2953563631, 0.0905707181, -0.0621563978, 0.099763453, 0.1812495142, 0.7674459815, 0.2327231616, 0.2935930789, 0.1703319103, -0.0979548171, 0.0021814017, -0.0064492924, -0.1200801954, -0.0111923395, 0.2903728783, -0.2454335988, 0.1893628985, 0.1547878832, -0.187... |
https://github.com/huggingface/datasets/issues/4237 | Common Voice 8 doesn't show datasets viewer | Hmm I don't agree here.
Anybody can always just bypass the system by setting whatever email. As soon as someone has access to the downloading script, it's trivial to tweak the code to not send the "correct" email but just any other one, and it would work.
Note that someone only has visibility on the code after havin... | https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0 | 111 | Common Voice 8 doesn't show datasets viewer
https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0
Hmm I don't agree here.
Anybody can always just bypass the system by setting whatever email. As soon as someone has access to the downloading script, it's trivial to tweak the code to not send the "cor... | [
-0.3276700377, 0.2826412618, 0.0286960993, 0.1016934812, -0.0353756994, 0.0222149193, 0.6439489722, 0.076558128, 0.3533987701, 0.2049578875, -0.250772357, -0.1054484919, -0.0896840245, -0.0457500294, 0.1518868357, 0.3686565459, -0.2056715786, 0.1514365524, 0.1459309459, -0.1047... |
https://github.com/huggingface/datasets/issues/4237 | Common Voice 8 doesn't show datasets viewer | > Additionally, looking at the code, I think we should implement a more robust way to send the user's email to Common Voice: currently anybody can tweak the script and send somebody else's email instead.
Yes, I agree we can forget about this @patrickvonplaten. After having had a look at the Common Voice website, I've seen they ... | https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0 | 136 | Common Voice 8 doesn't show datasets viewer
https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0
> Additionally, looking at the code, I think we should implement a more robust way to send the user's email to Common Voice: currently anybody can tweak the script and send somebody else's email instead.
Yes, ... | [
-0.4378187358, 0.2820081115, -0.0171392243, -0.0312947929, 0.0072664716, 0.0896093771, 0.5135827661, 0.3128056526, 0.1830434948, 0.1410139501, -0.3661122322, -0.1130832881, -0.072842598, -0.0658885613, 0.1129050553, 0.0829916894, -0.1138803065, 0.2530531585, 0.3841428459, -0.25... |
https://github.com/huggingface/datasets/issues/4230 | Why the `conll2003` dataset on huggingface only contains the `en` subset? Where is the German data? | Thanks for reporting @beyondguo.
Indeed, we generate this dataset from this raw data file URL: https://data.deepai.org/conll2003.zip
And that URL only contains the English version. | 
But on huggingface datasets:

Where is the German data? | 24 | Why the `conll2003` dataset on huggingface only contains the `en` subset? Where is the German data?

But on huggingface datasets:
Where is the German data? |
https://github.com/huggingface/datasets/issues/4221 | Dictionary Feature | Hi @jordiae,
Instead of the `Sequence` feature, you can use just a regular list: put the dict between `[` and `]`:
```python
"list_of_dict_feature": [
    {
        "key1_in_dict": datasets.Value("string"),
        "key2_in_dict": datasets.Value("int32"),
        ...
    }
],
```
Feel free to re-open thi... | Hi, I'm trying to create the loading script for a dataset in which one feature is a list of dictionaries, which afaik doesn't fit very well the values and structures supported by Value and Sequence. Is there any suggested workaround, am I missing something?
Thank you in advance. | 48 | Dictionary Feature
Hi, I'm trying to create the loading script for a dataset in which one feature is a list of dictionaries, which afaik doesn't fit very well the values and structures supported by Value and Sequence. Is there any suggested workaround, am I missing something?
Thank you in advance.
Hi @jordiae,
... | [
-0.046249602, -0.4512825608, -0.170710057, 0.1658608615, 0.093263872, -0.0161834396, 0.1318405122, 0.1520017833, 0.4049992263, 0.1299023777, 0.1432440877, 0.223382771, -0.070746623, 0.7212549448, -0.1806515455, -0.2812748551, -0.0481466874, 0.1059780419, 0.0737303942, 0.0739523... |
https://github.com/huggingface/datasets/issues/4221 | Dictionary Feature | > Hi @jordiae,
>
> Instead of the `Sequence` feature, you can use just a regular list: put the dict between `[` and `]`:
>
> ```python
> "list_of_dict_feature": [
> {
> "key1_in_dict": datasets.Value("string"),
> "key2_in_dict": datasets.Value("int32"),
> ...
> }
> ],
> ```... | Hi, I'm trying to create the loading script for a dataset in which one feature is a list of dictionaries, which afaik doesn't fit very well the values and structures supported by Value and Sequence. Is there any suggested workaround, am I missing something?
Thank you in advance. | 65 | Dictionary Feature
Hi, I'm trying to create the loading script for a dataset in which one feature is a list of dictionaries, which afaik doesn't fit very well the values and structures supported by Value and Sequence. Is there any suggested workaround, am I missing something?
Thank you in advance.
> Hi @jordiae,... | [
-0.0429693162, -0.4704377353, -0.1678056717, 0.1911447197, 0.0661601573, -0.0045956159, 0.1355632991, 0.1641891599, 0.4212374985, 0.1347170472, 0.1320910752, 0.2042677552, -0.085127905, 0.7377755642, -0.1715188473, -0.2838612199, -0.0400621369, 0.0985410661, 0.0720809624, 0.065... |
https://github.com/huggingface/datasets/issues/4217 | Big_Patent dataset broken | Thanks for reporting. The issue seems not to be directly related to the dataset viewer or the `datasets` library, but instead to it being hosted on Google Drive.
See related issues: https://github.com/huggingface/datasets/issues?q=is%3Aissue+is%3Aopen+drive.google.com
To quote [@lhoestq](https://github.com/huggin... | ## Dataset viewer issue for '*big_patent*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/big_patent/viewer/all/train)*
*Unable to view because it says FileNotFound, also cannot download it through the python API*
Am I the one who added this dataset? No
| 62 | Big_Patent dataset broken
## Dataset viewer issue for '*big_patent*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/big_patent/viewer/all/train)*
*Unable to view because it says FileNotFound, also cannot download it through the python API*
Am I the one who added this dataset ? N... | [
-0.4834668934, 0.104641214, -0.0131414868, 0.3420283496, 0.1577983648, 0.0383427329, 0.1688533425, 0.2354457974, 0.2180176079, -0.1323463321, -0.0315063484, -0.1041555107, -0.2722391188, 0.4202748537, 0.2981956005, 0.1469314247, 0.1277819872, 0.017385602, -0.0924532115, 0.14070... |
https://github.com/huggingface/datasets/issues/4217 | Big_Patent dataset broken | We should find out if the dataset license allows redistribution and contact the data owners to propose that they host their data on our Hub. | ## Dataset viewer issue for '*big_patent*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/big_patent/viewer/all/train)*
*Unable to view because it says FileNotFound, also cannot download it through the python API*
Am I the one who added this dataset? No
| 25 | Big_Patent dataset broken
## Dataset viewer issue for '*big_patent*'
**Link:** *[link to the dataset viewer page](https://huggingface.co/datasets/big_patent/viewer/all/train)*
*Unable to view because it says FileNotFound, also cannot download it through the python API*
Am I the one who added this dataset ? N... | [
-0.2477789819, 0.0562162139, -0.0191622525, 0.4259560406, 0.0408797935, -0.0157361049, 0.1593547463, 0.2846546471, 0.2502851188, -0.0345466584, -0.1826789826, 0.0505646504, -0.3296605051, 0.306931138, 0.270731926, 0.2027589083, 0.1753144562, -0.0609313212, -0.0583923906, -0.033... |
https://github.com/huggingface/datasets/issues/4211 | DatasetDict containing Datasets with different features when pushed to hub gets remapped features | Hi @pietrolesci, thanks for reporting.
Please note that this is by design: a `DatasetDict` has the same features for all its datasets. Normally, a `DatasetDict` is composed of several sub-datasets, each corresponding to a different **split**.
To handle sub-datasets with different features, we use another ap... | Hi there,
I am trying to load a dataset to the Hub. This dataset is a `DatasetDict` composed of various splits. Some splits have a different `Feature` mapping. Locally, the DatasetDict preserves the individual features but if I `push_to_hub` and then `load_dataset`, the features are all the same.
Dataset and code... | 69 | DatasetDict containing Datasets with different features when pushed to hub gets remapped features
Hi there,
I am trying to load a dataset to the Hub. This dataset is a `DatasetDict` composed of various splits. Some splits have a different `Feature` mapping. Locally, the DatasetDict preserves the individual feature... | [
0.0950680673, -0.5818799138, -0.0149743548, 0.3464848101, -0.0062353625, -0.086014092, 0.3061930239, 0.1616925895, 0.3734932244, -0.0174942464, -0.2923354506, 0.6108020544, 0.1393585652, 0.3625233173, 0.0080397101, 0.1103166118, 0.2680215836, 0.1151718944, -0.0468163006, -0.322... |
https://github.com/huggingface/datasets/issues/4211 | DatasetDict containing Datasets with different features when pushed to hub gets remapped features | Hi @albertvillanova,
Thanks a lot for your reply! I got it now. The strange thing for me was to have it working correctly (i.e., a DatasetDict with different features in some datasets) locally but not on the Hub. It would be great to have configurations supported by `push_to_hub`. Personally, this latter functionality ... | Hi there,
I am trying to load a dataset to the Hub. This dataset is a `DatasetDict` composed of various splits. Some splits have a different `Feature` mapping. Locally, the DatasetDict preserves the individual features but if I `push_to_hub` and then `load_dataset`, the features are all the same.
Dataset and code... | 68 | DatasetDict containing Datasets with different features when pushed to hub gets remapped features
Hi there,
I am trying to load a dataset to the Hub. This dataset is a `DatasetDict` composed of various splits. Some splits have a different `Feature` mapping. Locally, the DatasetDict preserves the individual feature... | [
0.0950680673, -0.5818799138, -0.0149743548, 0.3464848101, -0.0062353625, -0.086014092, 0.3061930239, 0.1616925895, 0.3734932244, -0.0174942464, -0.2923354506, 0.6108020544, 0.1393585652, 0.3625233173, 0.0080397101, 0.1103166118, 0.2680215836, 0.1151718944, -0.0468163006, -0.322... |
https://github.com/huggingface/datasets/issues/4211 | DatasetDict containing Datasets with different features when pushed to hub gets remapped features | Hi! Yes, we should override `DatasetDict.__setitem__` and throw an error if the features dictionaries are different. `DatasetDict` is a subclass of `dict`, so `DatasetDict.{update/setdefault}` need to be overridden as well. We could avoid this by subclassing `UserDict`, but then we would get a name collision - `DatasetDi... | Hi there,
I am trying to load a dataset to the Hub. This dataset is a `DatasetDict` composed of various splits. Some splits have a different `Feature` mapping. Locally, the DatasetDict preserves the individual features but if I `push_to_hub` and then `load_dataset`, the features are all the same.
Dataset and code... | 102 | DatasetDict containing Datasets with different features when pushed to hub gets remapped features
Hi there,
I am trying to load a dataset to the Hub. This dataset is a `DatasetDict` composed of various splits. Some splits have a different `Feature` mapping. Locally, the DatasetDict preserves the individual feature... | [
0.0950680673, -0.5818799138, -0.0149743548, 0.3464848101, -0.0062353625, -0.086014092, 0.3061930239, 0.1616925895, 0.3734932244, -0.0174942464, -0.2923354506, 0.6108020544, 0.1393585652, 0.3625233173, 0.0080397101, 0.1103166118, 0.2680215836, 0.1151718944, -0.0468163006, -0.322... |
https://github.com/huggingface/datasets/issues/4211 | DatasetDict containing Datasets with different features when pushed to hub gets remapped features | I would keep things simple and keep subclassing dict. Regarding the features check, I guess this can be done only for `push_to_hub`, right? It is the only function right now that requires the underlying datasets to be splits (e.g. train/test) and have the same features.
Note that later you will be able to push datas... | Hi there,
I am trying to load a dataset to the Hub. This dataset is a `DatasetDict` composed of various splits. Some splits have a different `Feature` mapping. Locally, the DatasetDict preserves the individual features but if I `push_to_hub` and then `load_dataset`, the features are all the same.
Dataset and code... | 76 | DatasetDict containing Datasets with different features when pushed to hub gets remapped features
Hi there,
I am trying to load a dataset to the Hub. This dataset is a `DatasetDict` composed of various splits. Some splits have a different `Feature` mapping. Locally, the DatasetDict preserves the individual feature... | [
0.0950680673, -0.5818799138, -0.0149743548, 0.3464848101, -0.0062353625, -0.086014092, 0.3061930239, 0.1616925895, 0.3734932244, -0.0174942464, -0.2923354506, 0.6108020544, 0.1393585652, 0.3625233173, 0.0080397101, 0.1103166118, 0.2680215836, 0.1151718944, -0.0468163006, -0.322... |
https://github.com/huggingface/datasets/issues/4210 | TypeError: Cannot cast array data from dtype('O') to dtype('int64') according to the rule 'safe' | Hi! Casting class labels from strings is currently not supported in the CSV loader, but you can get the same result with an additional map as follows:
```python
from datasets import load_dataset, Features, Value, ClassLabel
class_names = ["cmn", "deu", "rus", "fra", "eng", "jpn", "spa", "ita", "kor", "vie", "nld", "epo", "por", "tu...
```shell
- `transformers` version: 4.18.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.10.0+cu111 (True)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed ... | 134 | TypeError: Cannot cast array data from dtype('O') to dtype('int64') according to the rule 'safe'
### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.10.0+cu111... | [
-0.2058740258, -0.5463367105, -0.1671127826, 0.1710505188, 0.5628308058, -0.0438255742, 0.3648130894, 0.4633278251, 0.3307608366, 0.1103612408, -0.0379637405, 0.1857534945, -0.1733210981, 0.0352142192, -0.091168195, -0.2792181075, -0.0311877429, 0.0663471445, -0.4036820829, -0.... |
https://github.com/huggingface/datasets/issues/4210 | TypeError: Cannot cast array data from dtype('O') to dtype('int64') according to the rule 'safe' | @albertvillanova @mariosasko thank you, with that change now I get
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-9-eeb68eeb9bec> in <module>()
11 )
12 ... | ### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.10.0+cu111 (True)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed ... | 187 | TypeError: Cannot cast array data from dtype('O') to dtype('int64') according to the rule 'safe'
### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.10.0+cu111... | [
-0.2058740258, -0.5463367105, -0.1671127826, 0.1710505188, 0.5628308058, -0.0438255742, 0.3648130894, 0.4633278251, 0.3307608366, 0.1103612408, -0.0379637405, 0.1857534945, -0.1733210981, 0.0352142192, -0.091168195, -0.2792181075, -0.0311877429, 0.0663471445, -0.4036820829, -0.... |
https://github.com/huggingface/datasets/issues/4210 | TypeError: Cannot cast array data from dtype('O') to dtype('int64') according to the rule 'safe' | @mariosasko changed it like
```python
sentences = sentences.map(lambda ex: {"label" : features["label"].str2int(ex["label"]) if ex["label"] is not None else None}, features=features)
```
to avoid the above error. | ### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.10.0+cu111 (True)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed ... | 26 | TypeError: Cannot cast array data from dtype('O') to dtype('int64') according to the rule 'safe'
### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.10.0+cu111... | [
-0.2058740258, -0.5463367105, -0.1671127826, 0.1710505188, 0.5628308058, -0.0438255742, 0.3648130894, 0.4633278251, 0.3307608366, 0.1103612408, -0.0379637405, 0.1857534945, -0.1733210981, 0.0352142192, -0.091168195, -0.2792181075, -0.0311877429, 0.0663471445, -0.4036820829, -0.... |
https://github.com/huggingface/datasets/issues/4210 | TypeError: Cannot cast array data from dtype('O') to dtype('int64') according to the rule 'safe' | Any update on this? Is this correct?
> @mariosasko changed it like
>
> ```python
> sentences = sentences.map(lambda ex: {"label" : features["label"].str2int(ex["label"]) if ex["label"] is not None else None}, features=features)
> ```
>
> to avoid the above error.
| ### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.10.0+cu111 (True)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed ... | 41 | TypeError: Cannot cast array data from dtype('O') to dtype('int64') according to the rule 'safe'
### System Info
```shell
- `transformers` version: 4.18.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.10.0+cu111... | [
-0.2058740258, -0.5463367105, -0.1671127826, 0.1710505188, 0.5628308058, -0.0438255742, 0.3648130894, 0.4633278251, 0.3307608366, 0.1103612408, -0.0379637405, 0.1857534945, -0.1733210981, 0.0352142192, -0.091168195, -0.2792181075, -0.0311877429, 0.0663471445, -0.4036820829, -0.... |
https://github.com/huggingface/datasets/issues/4199 | Cache miss during reload for datasets using image fetch utilities through map | Hi! Maybe one of the objects in the function is not deterministic across sessions? You can read more about it and how to investigate here: https://huggingface.co/docs/datasets/about_cache | ## Describe the bug
It looks like the results of a `.map` operation on a dataset are missing the cache when you reload the script and always run from scratch. In the same interpreter session, they are able to find the cache and reload it. But, when you exit the interpreter and reload it, the downloading starts from scratch.
... | 28 | Cache miss during reload for datasets using image fetch utilities through map
## Describe the bug
It looks like the results of a `.map` operation on a dataset are missing the cache when you reload the script and always run from scratch. In the same interpreter session, they are able to find the cache and reload it. But, whe... | [
-0.5388091803, -0.1422931403, -0.0625115633, 0.2576912344, 0.0660191029, 0.0496222414, 0.1246499345, -0.0102835521, 0.485123843, 0.1197426617, -0.0498226732, 0.4438686967, 0.3680506647, -0.1765505075, -0.1030043662, 0.1202055439, -0.026852401, 0.1637635529, -0.182844162, -0.160... |
https://github.com/huggingface/datasets/issues/4199 | Cache miss during reload for datasets using image fetch utilities through map | Hi @apsdehal! Can you verify that replacing
```python
def fetch_single_image(image_url, timeout=None, retries=0):
    for _ in range(retries + 1):
        try:
            request = urllib.request.Request(
                image_url,
                data=None,
                headers={"user-agent": get_datasets_... | ## Describe the bug
It looks like the results of a `.map` operation on a dataset are missing the cache when you reload the script and always run from scratch. In the same interpreter session, they are able to find the cache and reload it. But, when you exit the interpreter and reload it, the downloading starts from scratch.
... | 88 | Cache miss during reload for datasets using image fetch utilities through map
## Describe the bug
It looks like the results of a `.map` operation on a dataset are missing the cache when you reload the script and always run from scratch. In the same interpreter session, they are able to find the cache and reload it. But, whe... | [
-0.5388091803, -0.1422931403, -0.0625115633, 0.2576912344, 0.0660191029, 0.0496222414, 0.1246499345, -0.0102835521, 0.485123843, 0.1197426617, -0.0498226732, 0.4438686967, 0.3680506647, -0.1765505075, -0.1030043662, 0.1202055439, -0.026852401, 0.1637635529, -0.182844162, -0.160... |
https://github.com/huggingface/datasets/issues/4199 | Cache miss during reload for datasets using image fetch utilities through map | Thanks @mariosasko. That does fix the issue. In general, since these image downloading utilities are being used by a lot of image datasets, shouldn't they be provided as part of the `datasets` library, to keep the logic consistent and READMEs smaller? If they already exist, that is also great, please point me to... | ## Describe the bug
It looks like the results of a `.map` operation on a dataset are missing the cache when you reload the script and always run from scratch. In the same interpreter session, they are able to find the cache and reload it. But, when you exit the interpreter and reload it, the downloading starts from scratch.
... | 63 | Cache miss during reload for datasets using image fetch utilities through map
## Describe the bug
It looks like the results of a `.map` operation on a dataset are missing the cache when you reload the script and always run from scratch. In the same interpreter session, they are able to find the cache and reload it. But, whe... | [
-0.5388091803, -0.1422931403, -0.0625115633, 0.2576912344, 0.0660191029, 0.0496222414, 0.1246499345, -0.0102835521, 0.485123843, 0.1197426617, -0.0498226732, 0.4438686967, 0.3680506647, -0.1765505075, -0.1030043662, 0.1202055439, -0.026852401, 0.1637635529, -0.182844162, -0.160... |
https://github.com/huggingface/datasets/issues/4199 | Cache miss during reload for datasets using image fetch utilities through map | You can find my rationale (and a proposed solution) for why these utilities are not a part of `datasets` here: https://github.com/huggingface/datasets/pull/4100#issuecomment-1097994003. | ## Describe the bug
It looks like the results of a `.map` operation on a dataset are missing the cache when you reload the script and always run from scratch. In the same interpreter session, they are able to find the cache and reload it. But, when you exit the interpreter and reload it, the downloading starts from scratch.
... | 21 | Cache miss during reload for datasets using image fetch utilities through map
## Describe the bug
It looks like the results of a `.map` operation on a dataset are missing the cache when you reload the script and always run from scratch. In the same interpreter session, they are able to find the cache and reload it. But, whe... | [
-0.5388091803, -0.1422931403, -0.0625115633, 0.2576912344, 0.0660191029, 0.0496222414, 0.1246499345, -0.0102835521, 0.485123843, 0.1197426617, -0.0498226732, 0.4438686967, 0.3680506647, -0.1765505075, -0.1030043662, 0.1202055439, -0.026852401, 0.1637635529, -0.182844162, -0.160... |
https://github.com/huggingface/datasets/issues/4199 | Cache miss during reload for datasets using image fetch utilities through map | Makes sense. But I think, as the number of image datasets grows, more people are copy-pasting the original code from the docs to use as-is while we make fixes to it later. I think we do need a central place for these, to avoid that confusion as well as to give easier access to image datasets. Should we restart that disc... | ## Describe the bug
It looks like the results of a `.map` operation on a dataset are missing the cache when you reload the script and always run from scratch. In the same interpreter session, they are able to find the cache and reload it. But, when you exit the interpreter and reload it, the downloading starts from scratch.
... | 65 | Cache miss during reload for datasets using image fetch utilities through map
## Describe the bug
It looks like the results of a `.map` operation on a dataset are missing the cache when you reload the script and always run from scratch. In the same interpreter session, they are able to find the cache and reload it. But, whe... | [
-0.5388091803, -0.1422931403, -0.0625115633, 0.2576912344, 0.0660191029, 0.0496222414, 0.1246499345, -0.0102835521, 0.485123843, 0.1197426617, -0.0498226732, 0.4438686967, 0.3680506647, -0.1765505075, -0.1030043662, 0.1202055439, -0.026852401, 0.1637635529, -0.182844162, -0.160... |
https://github.com/huggingface/datasets/issues/4192 | load_dataset can't load local dataset,Unable to find ... | Hi! :)
I believe that should work unless `dataset_infos.json` isn't actually a dataset. For Hugging Face datasets, there is usually a file named `dataset_infos.json` which contains metadata about the dataset (e.g. the dataset citation, license, description, etc.). Can you double-check that `dataset_infos.json` isn't j... |
Traceback (most recent call last):
File "/home/gs603/ahf/pretrained/model.py", line 48, in <module>
dataset = load_dataset("json",data_files="dataset/dataset_infos.json")
File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/load.py", line 1675, in load_dataset
**config_kwa... | 46 | load_dataset can't load local dataset,Unable to find ...
Traceback (most recent call last):
File "/home/gs603/ahf/pretrained/model.py", line 48, in <module>
dataset = load_dataset("json",data_files="dataset/dataset_infos.json")
File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/data... | [
-0.1754118204, 0.0892572477, -0.1238672063, 0.1317975819, 0.3044528067, 0.059617009, 0.4345732033, 0.4015061557, 0.2219167948, 0.1424890012, 0.0361839049, 0.2007317394, -0.0687808245, 0.2588721216, 0.009246666, 0.0185479969, -0.1723025739, 0.2523019314, 0.073971346, -0.17420156... |
https://github.com/huggingface/datasets/issues/4192 | load_dataset can't load local dataset,Unable to find ... | Hi @ahf876828330,
As @stevhliu pointed out, the proper way to load a dataset is not by trying to load its metadata file.
In your case, as the dataset script is local, you should rather point to your local loading script:
```python
dataset = load_dataset("dataset/opus_books.py")
```
Please, feel free to re-ope... |
Traceback (most recent call last):
File "/home/gs603/ahf/pretrained/model.py", line 48, in <module>
dataset = load_dataset("json",data_files="dataset/dataset_infos.json")
File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/load.py", line 1675, in load_dataset
**config_kwa... | 61 | load_dataset can't load local dataset,Unable to find ...
Traceback (most recent call last):
File "/home/gs603/ahf/pretrained/model.py", line 48, in <module>
dataset = load_dataset("json",data_files="dataset/dataset_infos.json")
File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/data... | [
-0.1754118204, 0.0892572477, -0.1238672063, 0.1317975819, 0.3044528067, 0.059617009, 0.4345732033, 0.4015061557, 0.2219167948, 0.1424890012, 0.0361839049, 0.2007317394, -0.0687808245, 0.2588721216, 0.009246666, 0.0185479969, -0.1723025739, 0.2523019314, 0.073971346, -0.17420156... |
https://github.com/huggingface/datasets/issues/4192 | load_dataset can't load local dataset,Unable to find ... | > Hi! :)
>
> I believe that should work unless `dataset_infos.json` isn't actually a dataset. For Hugging Face datasets, there is usually a file named `dataset_infos.json` which contains metadata about the dataset (eg. the dataset citation, license, description, etc). Can you double-check that `dataset_infos.json` i... |
Traceback (most recent call last):
File "/home/gs603/ahf/pretrained/model.py", line 48, in <module>
dataset = load_dataset("json",data_files="dataset/dataset_infos.json")
File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/load.py", line 1675, in load_dataset
**config_kwa... | 77 | load_dataset can't load local dataset,Unable to find ...
Traceback (most recent call last):
File "/home/gs603/ahf/pretrained/model.py", line 48, in <module>
dataset = load_dataset("json",data_files="dataset/dataset_infos.json")
File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/data... | [
-0.1754118204, 0.0892572477, -0.1238672063, 0.1317975819, 0.3044528067, 0.059617009, 0.4345732033, 0.4015061557, 0.2219167948, 0.1424890012, 0.0361839049, 0.2007317394, -0.0687808245, 0.2588721216, 0.009246666, 0.0185479969, -0.1723025739, 0.2523019314, 0.073971346, -0.17420156... |
https://github.com/huggingface/datasets/issues/4192 | load_dataset can't load local dataset,Unable to find ... | The metadata file isn't a dataset so you can't turn it into one. You should try @albertvillanova's code snippet above (now merged in the docs [here](https://huggingface.co/docs/datasets/master/en/loading#local-loading-script)), which uses your local loading script `opus_books.py` to:
1. Download the actual dataset. ... |
Traceback (most recent call last):
File "/home/gs603/ahf/pretrained/model.py", line 48, in <module>
dataset = load_dataset("json",data_files="dataset/dataset_infos.json")
File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/datasets/load.py", line 1675, in load_dataset
**config_kwa... | 51 | load_dataset can't load local dataset,Unable to find ...
Traceback (most recent call last):
File "/home/gs603/ahf/pretrained/model.py", line 48, in <module>
dataset = load_dataset("json",data_files="dataset/dataset_infos.json")
File "/home/gs603/miniconda3/envs/coderepair/lib/python3.7/site-packages/data... | [
-0.1754118204, 0.0892572477, -0.1238672063, 0.1317975819, 0.3044528067, 0.059617009, 0.4345732033, 0.4015061557, 0.2219167948, 0.1424890012, 0.0361839049, 0.2007317394, -0.0687808245, 0.2588721216, 0.009246666, 0.0185479969, -0.1723025739, 0.2523019314, 0.073971346, -0.17420156... |
https://github.com/huggingface/datasets/issues/4191 | feat: create an `Array3D` column from a list of arrays of dimension 2 | Hi @SaulLu, thanks for your proposal.
I just got a bit confused about the dimensions...
- For the 2D case, you mention it is possible to create an `Array2D` from a list of arrays of dimension 1
- However, you give an example of creating an `Array2D` from arrays of dimension 2:
- the values of `data_map` are arr... | **Is your feature request related to a problem? Please describe.**
It is possible to create an `Array2D` column from a list of arrays of dimension 1. Similarly, I think it might be nice to be able to create an `Array3D` column from a list of lists of arrays of dimension 1.
To illustrate my proposal, let's take the... | 255 | feat: create an `Array3D` column from a list of arrays of dimension 2
**Is your feature request related to a problem? Please describe.**
It is possible to create an `Array2D` column from a list of arrays of dimension 1. Similarly, I think it might be nice to be able to create an `Array3D` column from a list of list... | [
0.3124177456, -0.2744894028, -0.1943417341, 0.1792210191, 0.1064045355, 0.2191018909, 0.752160728, 0.3646605909, 0.3426304162, 0.0763215721, 0.2014093101, 0.0522667319, -0.2193995863, 0.5241888165, 0.1322811246, -0.5646165013, 0.2458340824, 0.2844041884, -0.1231604069, 0.149668... |
https://github.com/huggingface/datasets/issues/4191 | feat: create an `Array3D` column from a list of arrays of dimension 2 | Hi @albertvillanova ,
Indeed my message was confusing and you guessed right :smile:: I think it would be interesting to be able to create an Array3D from a list of arrays of dimension 2.
For the 2D case I should have given as a "similar" example:
```python
data_map_1D = {
1: np.array([0.2, 0.4]),
2... | **Is your feature request related to a problem? Please describe.**
It is possible to create an `Array2D` column from a list of arrays of dimension 1. Similarly, I think it might be nice to be able to create an `Array3D` column from a list of lists of arrays of dimension 1.
To illustrate my proposal, let's take the... | 81 | feat: create an `Array3D` column from a list of arrays of dimension 2
**Is your feature request related to a problem? Please describe.**
It is possible to create an `Array2D` column from a list of arrays of dimension 1. Similarly, I think it might be nice to be able to create an `Array3D` column from a list of list... | [
0.3124177456, -0.2744894028, -0.1943417341, 0.1792210191, 0.1064045355, 0.2191018909, 0.752160728, 0.3646605909, 0.3426304162, 0.0763215721, 0.2014093101, 0.0522667319, -0.2193995863, 0.5241888165, 0.1322811246, -0.5646165013, 0.2458340824, 0.2844041884, -0.1231604069, 0.149668... |
https://github.com/huggingface/datasets/issues/4185 | Librispeech documentation, clarification on format | The documentation in the code is definitely outdated - thanks for letting me know, I'll remove it in https://github.com/huggingface/datasets/pull/4184.
You're exactly right: the `audio` column's `array` already decodes the audio file to the correct waveform. This is done on the fly, which is also why one should **not** do `ds["a... | https://github.com/huggingface/datasets/blob/cd3ce34ab1604118351e1978d26402de57188901/datasets/librispeech_asr/librispeech_asr.py#L53
> Note that in order to limit the required storage for preparing this dataset, the audio
> is stored in the .flac format and is not converted to a float32 array. To convert, the audi... | 61 | Librispeech documentation, clarification on format
https://github.com/huggingface/datasets/blob/cd3ce34ab1604118351e1978d26402de57188901/datasets/librispeech_asr/librispeech_asr.py#L53
> Note that in order to limit the required storage for preparing this dataset, the audio
> is stored in the .flac format and is n... | [
-0.1542334408, -0.3340955675, -0.1759803146, 0.2739016414, 0.216354534, -0.2398924828, 0.4355738759, 0.1572939605, 0.2130361646, 0.0300804526, -0.556984067, 0.2139859498, -0.2302835286, 0.085956037, 0.0961477235, 0.0931838453, 0.0818833336, 0.2941011786, -0.4400452077, -0.17477... |
https://github.com/huggingface/datasets/issues/4185 | Librispeech documentation, clarification on format | So, again to clarify: On disk, only the raw flac file content is stored? Is this also the case after `save_to_disk`?
And is it simple to also store it re-encoded as ogg or mp3 instead?
| https://github.com/huggingface/datasets/blob/cd3ce34ab1604118351e1978d26402de57188901/datasets/librispeech_asr/librispeech_asr.py#L53
> Note that in order to limit the required storage for preparing this dataset, the audio
> is stored in the .flac format and is not converted to a float32 array. To convert, the audi... | 35 | Librispeech documentation, clarification on format
https://github.com/huggingface/datasets/blob/cd3ce34ab1604118351e1978d26402de57188901/datasets/librispeech_asr/librispeech_asr.py#L53
> Note that in order to limit the required storage for preparing this dataset, the audio
> is stored in the .flac format and is n... | [
-0.1542334408, -0.3340955675, -0.1759803146, 0.2739016414, 0.216354534, -0.2398924828, 0.4355738759, 0.1572939605, 0.2130361646, 0.0300804526, -0.556984067, 0.2139859498, -0.2302835286, 0.085956037, 0.0961477235, 0.0931838453, 0.0818833336, 0.2941011786, -0.4400452077, -0.17477... |
https://github.com/huggingface/datasets/issues/4185 | Librispeech documentation, clarification on format | Hey,
Sorry yeah I was just about to look into this! We actually had an outdated version of Librispeech ASR that didn't save any files, but instead converted the audio files to byte strings, which were then decoded on the fly. This, however, is not very user-friendly, so we recently decided to instead show the full path... | https://github.com/huggingface/datasets/blob/cd3ce34ab1604118351e1978d26402de57188901/datasets/librispeech_asr/librispeech_asr.py#L53
> Note that in order to limit the required storage for preparing this dataset, the audio
> is stored in the .flac format and is not converted to a float32 array. To convert, the audi... | 119 | Librispeech documentation, clarification on format
https://github.com/huggingface/datasets/blob/cd3ce34ab1604118351e1978d26402de57188901/datasets/librispeech_asr/librispeech_asr.py#L53
> Note that in order to limit the required storage for preparing this dataset, the audio
> is stored in the .flac format and is n... | [
-0.1542334408, -0.3340955675, -0.1759803146, 0.2739016414, 0.216354534, -0.2398924828, 0.4355738759, 0.1572939605, 0.2130361646, 0.0300804526, -0.556984067, 0.2139859498, -0.2302835286, 0.085956037, 0.0961477235, 0.0931838453, 0.0818833336, 0.2941011786, -0.4400452077, -0.17477... |
https://github.com/huggingface/datasets/issues/4185 | Librispeech documentation, clarification on format | > I don't think it's a good idea to convert to MP3 out-of-the-box, but we could maybe think about some kind of convert function for audio datasets cc @lhoestq ?
Sure, I would expect that `load_dataset("librispeech_asr")` would give you the original (not re-encoded) data (flac or already decoded). So such re-encoding... | https://github.com/huggingface/datasets/blob/cd3ce34ab1604118351e1978d26402de57188901/datasets/librispeech_asr/librispeech_asr.py#L53
> Note that in order to limit the required storage for preparing this dataset, the audio
> is stored in the .flac format and is not converted to a float32 array. To convert, the audi... | 67 | Librispeech documentation, clarification on format
https://github.com/huggingface/datasets/blob/cd3ce34ab1604118351e1978d26402de57188901/datasets/librispeech_asr/librispeech_asr.py#L53
> Note that in order to limit the required storage for preparing this dataset, the audio
> is stored in the .flac format and is n... | [
-0.1542334408, -0.3340955675, -0.1759803146, 0.2739016414, 0.216354534, -0.2398924828, 0.4355738759, 0.1572939605, 0.2130361646, 0.0300804526, -0.556984067, 0.2139859498, -0.2302835286, 0.085956037, 0.0961477235, 0.0931838453, 0.0818833336, 0.2941011786, -0.4400452077, -0.17477... |
https://github.com/huggingface/datasets/issues/4185 | Librispeech documentation, clarification on format | A follow-up question: I wonder whether a Parquet dataset is maybe more what we actually want to have? (Following also my comment here: https://github.com/huggingface/datasets/pull/4184#issuecomment-1105045491.) Because I think we actually would prefer to embed the data content in the dataset.
So, instead of `save_to... | https://github.com/huggingface/datasets/blob/cd3ce34ab1604118351e1978d26402de57188901/datasets/librispeech_asr/librispeech_asr.py#L53
> Note that in order to limit the required storage for preparing this dataset, the audio
> is stored in the .flac format and is not converted to a float32 array. To convert, the audi... | 64 | Librispeech documentation, clarification on format
https://github.com/huggingface/datasets/blob/cd3ce34ab1604118351e1978d26402de57188901/datasets/librispeech_asr/librispeech_asr.py#L53
> Note that in order to limit the required storage for preparing this dataset, the audio
> is stored in the .flac format and is n... | [
-0.1542334408, -0.3340955675, -0.1759803146, 0.2739016414, 0.216354534, -0.2398924828, 0.4355738759, 0.1572939605, 0.2130361646, 0.0300804526, -0.556984067, 0.2139859498, -0.2302835286, 0.085956037, 0.0961477235, 0.0931838453, 0.0818833336, 0.2941011786, -0.4400452077, -0.17477... |
https://github.com/huggingface/datasets/issues/4185 | Librispeech documentation, clarification on format | `save_to_disk` saves the dataset as an Arrow file, which is the format we use to load a dataset using memory mapping. This way the dataset does not fill your RAM, but is read from your disk instead.
Therefore you can directly reload a dataset saved with `save_to_disk` using `load_from_disk`.
Parquet files are use... | https://github.com/huggingface/datasets/blob/cd3ce34ab1604118351e1978d26402de57188901/datasets/librispeech_asr/librispeech_asr.py#L53
> Note that in order to limit the required storage for preparing this dataset, the audio
> is stored in the .flac format and is not converted to a float32 array. To convert, the audi... | 107 | Librispeech documentation, clarification on format
https://github.com/huggingface/datasets/blob/cd3ce34ab1604118351e1978d26402de57188901/datasets/librispeech_asr/librispeech_asr.py#L53
> Note that in order to limit the required storage for preparing this dataset, the audio
> is stored in the .flac format and is n... | [
-0.1542334408, -0.3340955675, -0.1759803146, 0.2739016414, 0.216354534, -0.2398924828, 0.4355738759, 0.1572939605, 0.2130361646, 0.0300804526, -0.556984067, 0.2139859498, -0.2302835286, 0.085956037, 0.0961477235, 0.0931838453, 0.0818833336, 0.2941011786, -0.4400452077, -0.17477... |
https://github.com/huggingface/datasets/issues/4182 | Zenodo.org download is not responding | Hi @dkajtoch, please note that at HuggingFace we are not hosting this dataset: we are just using a script to download their data file and create a dataset from it.
It was the dataset owners' decision to host their data at Zenodo. You can see this on their website: https://marcobaroni.org/composes/sick.html
And yes... | ## Describe the bug
Source download_url from zenodo.org does not respond.
`_DOWNLOAD_URL = "https://zenodo.org/record/2787612/files/SICK.zip?download=1"`
Other datasets also use zenodo.org to store data and they cannot be downloaded as well.
It would be better to actually use more reliable way to store original ... | 94 | Zenodo.org download is not responding
## Describe the bug
Source download_url from zenodo.org does not respond.
`_DOWNLOAD_URL = "https://zenodo.org/record/2787612/files/SICK.zip?download=1"`
Other datasets also use zenodo.org to store data and they cannot be downloaded as well.
It would be better to actually ... | [
-0.0107859969, -0.1561817527, -0.1249701902, 0.3887094557, 0.4191482961, -0.1206511408, 0.2301652282, 0.2020222545, -0.0820400119, 0.0954474658, -0.6322540045, -0.1440723985, 0.2485608906, 0.0900175571, 0.030166056, -0.0484910905, -0.0453971438, 0.0510842651, -0.2552132607, -0.... |
https://github.com/huggingface/datasets/issues/4182 | Zenodo.org download is not responding | Thanks @albertvillanova. I know that the problem lies in the source data. I just wanted to point out that these kinds of problems are unavoidable without having one place where data sources are cached. Websites may go down or data sources may move. Having a copy in the Hugging Face Hub would be a great solution. | ## Describe the bug
Source download_url from zenodo.org does not respond.
`_DOWNLOAD_URL = "https://zenodo.org/record/2787612/files/SICK.zip?download=1"`
Other datasets also use zenodo.org to store data and they cannot be downloaded as well.
It would be better to actually use more reliable way to store original ... | 55 | Zenodo.org download is not responding
## Describe the bug
Source download_url from zenodo.org does not respond.
`_DOWNLOAD_URL = "https://zenodo.org/record/2787612/files/SICK.zip?download=1"`
Other datasets also use zenodo.org to store data and they cannot be downloaded as well.
It would be better to actually ... | [
-0.080918774, 0.0952439606, -0.1096673757, 0.2967681587, 0.4041541815, -0.1066377908, 0.1737032235, 0.2406035513, -0.2892254293, 0.0863700435, -0.5147069097, -0.1286249906, 0.4303603768, -0.1244517639, -0.0094488226, 0.0154587887, -0.0607607476, 0.1537501961, -0.1967417449, -0.... |
https://github.com/huggingface/datasets/issues/4182 | Zenodo.org download is not responding | Definitely, @dkajtoch! But we have to ask permission from the data owners. And many dataset licenses directly forbid data redistribution: in those cases we are not allowed to host their data on our Hub. | ## Describe the bug
Source download_url from zenodo.org does not respond.
`_DOWNLOAD_URL = "https://zenodo.org/record/2787612/files/SICK.zip?download=1"`
Other datasets also use zenodo.org to store data and they cannot be downloaded as well.
It would be better to actually use more reliable way to store original ... | 34 | Zenodo.org download is not responding
## Describe the bug
Source download_url from zenodo.org does not respond.
`_DOWNLOAD_URL = "https://zenodo.org/record/2787612/files/SICK.zip?download=1"`
Other datasets also use zenodo.org to store data and they cannot be downloaded as well.
It would be better to actually ... | [
-0.1527274996, 0.0047500045, -0.0980382189, 0.3225700557, 0.4120680392, -0.0858239457, 0.1952845007, 0.3025969267, -0.1455080807, 0.1067048386, -0.6057358384, 0.118277587, 0.2712568343, -0.0177002829, -0.0803629681, -0.0069501335, -0.0406547561, 0.0712229013, -0.2342373878, -0.... |
https://github.com/huggingface/datasets/issues/4181 | FLEURS | Yes, you just have to use `dl_manager.iter_archive` instead of `dl_manager.download_and_extract`.
That's because `download_and_extract` doesn't support TAR archives in streaming mode. | ## Dataset viewer issue for '*name of the dataset*'
https://huggingface.co/datasets/google/fleurs
```
Status code: 400
Exception: NotImplementedError
Message: Extraction protocol for TAR archives like 'https://storage.googleapis.com/xtreme_translations/FLEURS/af_za.tar.gz' is not implemented in str... | 20 | FLEURS
## Dataset viewer issue for '*name of the dataset*'
https://huggingface.co/datasets/google/fleurs
```
Status code: 400
Exception: NotImplementedError
Message: Extraction protocol for TAR archives like 'https://storage.googleapis.com/xtreme_translations/FLEURS/af_za.tar.gz' is not implement... | [
-0.4216788113, -0.0958748609, -0.0218318552, 0.4817814529, 0.0592254549, -0.0917436033, 0.0025128934, 0.300100714, 0.2116694897, 0.3248205483, 0.0401249714, 0.2440659106, -0.098210901, 0.1955389529, 0.1705656797, -0.2471702546, -0.1001694649, 0.233184129, -0.1934710443, 0.01725... |
https://github.com/huggingface/datasets/issues/4181 | FLEURS | Tried to make it streamable, but I don't think it's really possible. @lhoestq @polinaeterna maybe you guys can check:
https://huggingface.co/datasets/google/fleurs/commit/dcf80160cd77977490a8d32b370c027107f2407b
real quick.
I think the problem is that we cannot ensure that the metadata file is found before th... | ## Dataset viewer issue for '*name of the dataset*'
https://huggingface.co/datasets/google/fleurs
```
Status code: 400
Exception: NotImplementedError
Message: Extraction protocol for TAR archives like 'https://storage.googleapis.com/xtreme_translations/FLEURS/af_za.tar.gz' is not implemented in str... | 47 | FLEURS
## Dataset viewer issue for '*name of the dataset*'
https://huggingface.co/datasets/google/fleurs
```
Status code: 400
Exception: NotImplementedError
Message: Extraction protocol for TAR archives like 'https://storage.googleapis.com/xtreme_translations/FLEURS/af_za.tar.gz' is not implement... | [
-0.4105003774, -0.0333836749, -0.0058600032, 0.3899718523, 0.0708154961, -0.2282118499, 0.0011789129, 0.0690635741, -0.1195237041, 0.3359758556, -0.0833670497, 0.3761875629, -0.2372799069, -0.040668752, 0.1463757306, -0.1828031987, -0.1151861027, 0.2371740937, -0.1535304189, 0.... |
https://github.com/huggingface/datasets/issues/4181 | FLEURS | @patrickvonplaten I think the metadata file should be found first because the audio files are contained in a folder next to the metadata files (just as in Common Voice), so the metadata files should be "on top of the list" as they are closer to the root in the directory hierarchy | ## Dataset viewer issue for '*name of the dataset*'
https://huggingface.co/datasets/google/fleurs
```
Status code: 400
Exception: NotImplementedError
Message: Extraction protocol for TAR archives like 'https://storage.googleapis.com/xtreme_translations/FLEURS/af_za.tar.gz' is not implemented in str... | 51 | FLEURS
## Dataset viewer issue for '*name of the dataset*'
https://huggingface.co/datasets/google/fleurs
```
Status code: 400
Exception: NotImplementedError
Message: Extraction protocol for TAR archives like 'https://storage.googleapis.com/xtreme_translations/FLEURS/af_za.tar.gz' is not implement... | [
-0.2602010965, -0.0378421322, 0.0043258998, 0.4519119561, -0.022629505, -0.183288902, 0.0751354322, 0.0605066046, -0.0186401047, 0.282525599, -0.1181558892, 0.3060654402, -0.1763246953, -0.1571860164, 0.1485980302, -0.09471035, -0.1085318923, 0.2934680283, -0.1141458452, -0.116... |
https://github.com/huggingface/datasets/issues/4181 | FLEURS | The order of the files is determined when the TAR archive is created, depending on the commands the creator ran.
If the metadata file is not at the beginning of the archive, that makes streaming completely inefficient. In this case the TAR archive needs to be recreated in an appropriate order. | ## Dataset viewer issue for '*name of the dataset*'
https://huggingface.co/datasets/google/fleurs
```
Status code: 400
Exception: NotImplementedError
Message: Extraction protocol for TAR archives like 'https://storage.googleapis.com/xtreme_translations/FLEURS/af_za.tar.gz' is not implemented in str... | 51 | FLEURS
## Dataset viewer issue for '*name of the dataset*'
https://huggingface.co/datasets/google/fleurs
```
Status code: 400
Exception: NotImplementedError
Message: Extraction protocol for TAR archives like 'https://storage.googleapis.com/xtreme_translations/FLEURS/af_za.tar.gz' is not implement... | [
-0.2956910729, -0.2175756842, 0.014172053, 0.533467114, -0.0148763657, -0.1429132521, -0.0185219832, 0.0830985829, 0.0446124487, 0.2024894506, 0.0721557215, 0.2828602493, -0.0487505905, -0.0220458973, 0.1474587619, -0.1707114577, -0.1520995647, 0.2791154683, -0.1519186348, -0.0... |
https://github.com/huggingface/datasets/issues/4181 | FLEURS | Actually we could maybe just host the metadata file ourselves and then stream the audio data only. Don't think that this would be a problem for the FLEURS authors (I can ask them :-)) | ## Dataset viewer issue for '*name of the dataset*'
https://huggingface.co/datasets/google/fleurs
```
Status code: 400
Exception: NotImplementedError
Message: Extraction protocol for TAR archives like 'https://storage.googleapis.com/xtreme_translations/FLEURS/af_za.tar.gz' is not implemented in str... | 34 | FLEURS
## Dataset viewer issue for '*name of the dataset*'
https://huggingface.co/datasets/google/fleurs
```
Status code: 400
Exception: NotImplementedError
Message: Extraction protocol for TAR archives like 'https://storage.googleapis.com/xtreme_translations/FLEURS/af_za.tar.gz' is not implement... | [
-0.2880881429, 0.0965603814, 0.0083137583, 0.4689753354, -0.0050418801, -0.2420444489, 0.1928471178, 0.0732811168, -0.093463026, 0.3695701063, -0.129846409, 0.3443502188, -0.2798325419, -0.0799771622, 0.0928558931, -0.0724117979, -0.1600487083, 0.243618086, -0.1604785919, -0.05... |
https://github.com/huggingface/datasets/issues/4180 | Add some iteration method on a dataset column (specific for inference) | Thanks for the suggestion! I agree it would be nice to have something directly in `datasets` to do something as simple as that
cc @albertvillanova @mariosasko @polinaeterna What do you think if we have something similar to pandas `Series` that wouldn't bring everything in memory when doing `dataset["audio"]`? Curr... | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
Currently, `dataset["audio"]` will load EVERY element in the dataset in RAM, which can be quite big for an audio dataset.
Having an iterator (or sequence) type of object would make inference ... | 111 | Add some iteration method on a dataset column (specific for inference)
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
Currently, `dataset["audio"]` will load EVERY element in the dataset in RAM, which can be quite big for an audio dataset... | [
-0.3390596509, -0.1144976988, -0.1280233264, 0.0584006421, 0.2029085755, -0.1514475197, 0.2577140331, 0.0180890877, -0.0479570664, 0.3820746839, -0.0605078973, 0.3480749428, -0.2523988485, 0.0022309164, 0.1120157912, -0.2644658983, -0.0016953528, 0.2691255212, -0.3120146692, -0... |
https://github.com/huggingface/datasets/issues/4180 | Add some iteration method on a dataset column (specific for inference) | I agree that the current behavior (decoding all audio files in the dataset when accessing `dataset["audio"]`) is not useful, IMHO. Indeed in our docs, we are constantly warning our collaborators not to do that.
Therefore I upvote for a "useful" behavior of `dataset["audio"]`. I don't think the breaking change is importa... | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
Currently, `dataset["audio"]` will load EVERY element in the dataset in RAM, which can be quite big for an audio dataset.
Having an iterator (or sequence) type of object would make inference ... | 97 | Add some iteration method on a dataset column (specific for inference)
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
Currently, `dataset["audio"]` will load EVERY element in the dataset in RAM, which can be quite big for an audio dataset... | [
-0.3283700943, -0.1157748625, -0.066133894, 0.0635911077, 0.1763314456, -0.2033714354, 0.262514025, -0.0213142615, -0.0680218264, 0.4297318757, -0.1349968314, 0.3932086825, -0.2496891022, 0.0722303316, 0.0621931702, -0.1902333796, -0.0295237079, 0.3479429781, -0.2762725949, -0.... |
https://github.com/huggingface/datasets/issues/4180 | Add some iteration method on a dataset column (specific for inference) | I recall I had the same idea while working on the `Image` feature, so I agree implementing something similar to `pd.Series` that lazily brings elements in memory would be beneficial. | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
Currently, `dataset["audio"]` will load EVERY element in the dataset in RAM, which can be quite big for an audio dataset.
Having an iterator (or sequence) type of object would make inference ... | 30 | Add some iteration method on a dataset column (specific for inference)
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
Currently, `dataset["audio"]` will load EVERY element in the dataset in RAM, which can be quite big for an audio dataset... | [
-0.3346259296,
-0.1992710233,
-0.1157855019,
0.0421663523,
0.1764318496,
-0.1835933924,
0.2358644754,
0.0146749625,
-0.0409148298,
0.4126093388,
-0.050591588,
0.3249539733,
-0.2580443621,
0.0624595471,
0.0825331062,
-0.2795501947,
-0.0373106264,
0.3498583138,
-0.2074298859,
-0.... |
https://github.com/huggingface/datasets/issues/4180 | Add some iteration method on a dataset column (specific for inference) | @lhoestq @mariosasko Could you please give a link to that new feature of `pandas.Series`? As far as I remember since I worked with pandas for more than 6 years, there was no lazy in-memory feature; it was everything in-memory; that was the reason why other frameworks were created, like Vaex or Dask, e.g. | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
Currently, `dataset["audio"]` will load EVERY element in the dataset in RAM, which can be quite big for an audio dataset.
Having an iterator (or sequence) type of object, would make inference ... | 53 | Add some iteration method on a dataset column (specific for inference)
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
Currently, `dataset["audio"]` will load EVERY element in the dataset in RAM, which can be quite big for an audio dataset... | [
-0.3586827517,
-0.1708164811,
-0.1054126993,
0.0674949959,
0.1956336051,
-0.0567914844,
0.2455618232,
0.0106570693,
-0.0498886667,
0.4214864969,
-0.0571998842,
0.2929975688,
-0.1949506402,
0.0127486978,
-0.0027335205,
-0.2876277864,
0.0266217198,
0.2824605107,
-0.329136014,
-0.... |
https://github.com/huggingface/datasets/issues/4180 | Add some iteration method on a dataset column (specific for inference) | Yea pandas doesn't do lazy loading. I was referring to pandas.Series to say that they have a dedicated class to represent a column ;) | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
Currently, `dataset["audio"]` will load EVERY element in the dataset in RAM, which can be quite big for an audio dataset.
Having an iterator (or sequence) type of object, would make inference ... | 24 | Add some iteration method on a dataset column (specific for inference)
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
Currently, `dataset["audio"]` will load EVERY element in the dataset in RAM, which can be quite big for an audio dataset... | [
-0.3311902583,
-0.1377672255,
-0.0978266373,
0.0600640811,
0.195251286,
-0.1097995713,
0.3782247603,
-0.0013356889,
0.0039669308,
0.3533789814,
-0.1139354706,
0.3851184249,
-0.1974339932,
0.0876601338,
0.0624893047,
-0.2848600745,
-0.032793086,
0.3595253825,
-0.3034151196,
-0.0... |
https://github.com/huggingface/datasets/issues/4179 | Dataset librispeech_asr fails to load | Another thing, but maybe this should be a separate issue: As I see from the code, it would try to use up to 16 simultaneous downloads? This is problematic for Librispeech or anything on OpenSLR. On [the homepage](https://www.openslr.org/), it says:
> If you want to download things from this site, please download the... | ## Describe the bug
The dataset librispeech_asr (standard Librispeech) fails to load.
## Steps to reproduce the bug
```python
datasets.load_dataset("librispeech_asr")
```
## Expected results
It should download and prepare the whole dataset (all subsets).
In [the doc](https://huggingface.co/datasets/libris... | 101 | Dataset librispeech_asr fails to load
## Describe the bug
The dataset librispeech_asr (standard Librispeech) fails to load.
## Steps to reproduce the bug
```python
datasets.load_dataset("librispeech_asr")
```
## Expected results
It should download and prepare the whole dataset (all subsets).
In [the doc... | [
-0.2864782512,
-0.4839607477,
0.0680988878,
0.6340244412,
0.1525813788,
0.2355626971,
0.1461004764,
0.2371000201,
0.2123608291,
-0.0533141494,
-0.1300386786,
0.0948216319,
-0.3211013675,
0.2411901802,
0.0585025474,
-0.0512975715,
-0.1185007244,
0.2319578677,
-0.1747397184,
-0.1... |
https://github.com/huggingface/datasets/issues/4179 | Dataset librispeech_asr fails to load | Sorry maybe the docs haven't been super clear here. By `split` we mean one of `train.500`, `train.360`, `train.100`, `validation`, `test`. For Librispeech, you'll have to specify a config (either `other` or `clean`) though:
```py
datasets.load_dataset("librispeech_asr", "clean")
```
should work and give you al... | ## Describe the bug
The dataset librispeech_asr (standard Librispeech) fails to load.
## Steps to reproduce the bug
```python
datasets.load_dataset("librispeech_asr")
```
## Expected results
It should download and prepare the whole dataset (all subsets).
In [the doc](https://huggingface.co/datasets/libris... | 55 | Dataset librispeech_asr fails to load
## Describe the bug
The dataset librispeech_asr (standard Librispeech) fails to load.
## Steps to reproduce the bug
```python
datasets.load_dataset("librispeech_asr")
```
## Expected results
It should download and prepare the whole dataset (all subsets).
In [the doc... | [
-0.2864782512,
-0.4839607477,
0.0680988878,
0.6340244412,
0.1525813788,
0.2355626971,
0.1461004764,
0.2371000201,
0.2123608291,
-0.0533141494,
-0.1300386786,
0.0948216319,
-0.3211013675,
0.2411901802,
0.0585025474,
-0.0512975715,
-0.1185007244,
0.2319578677,
-0.1747397184,
-0.1... |
https://github.com/huggingface/datasets/issues/4179 | Dataset librispeech_asr fails to load | If you need both `"clean"` and `"other"` I think you'll have to concatenate them as follows:
```py
from datasets import concatenate_datasets, load_dataset

# concatenate_datasets expects Dataset objects, so load explicit splits
# rather than concatenating whole DatasetDicts:
other = load_dataset("librispeech_asr", "other", split="train.500")
clean = load_dataset("librispeech_asr", "clean", split="train.100+train.360")
librispeech = concatenate_datasets([other, clean])
... | ## Describe the bug
The dataset librispeech_asr (standard Librispeech) fails to load.
## Steps to reproduce the bug
```python
datasets.load_dataset("librispeech_asr")
```
## Expected results
It should download and prepare the whole dataset (all subsets).
In [the doc](https://huggingface.co/datasets/libris... | 38 | Dataset librispeech_asr fails to load
## Describe the bug
The dataset librispeech_asr (standard Librispeech) fails to load.
## Steps to reproduce the bug
```python
datasets.load_dataset("librispeech_asr")
```
## Expected results
It should download and prepare the whole dataset (all subsets).
In [the doc... | [
-0.2864782512,
-0.4839607477,
0.0680988878,
0.6340244412,
0.1525813788,
0.2355626971,
0.1461004764,
0.2371000201,
0.2123608291,
-0.0533141494,
-0.1300386786,
0.0948216319,
-0.3211013675,
0.2411901802,
0.0585025474,
-0.0512975715,
-0.1185007244,
0.2319578677,
-0.1747397184,
-0.1... |
https://github.com/huggingface/datasets/issues/4179 | Dataset librispeech_asr fails to load | Downloading one split would be:
```py
from datasets import load_dataset
other = load_dataset("librispeech_asr", "other", split="train.500")
```
| ## Describe the bug
The dataset librispeech_asr (standard Librispeech) fails to load.
## Steps to reproduce the bug
```python
datasets.load_dataset("librispeech_asr")
```
## Expected results
It should download and prepare the whole dataset (all subsets).
In [the doc](https://huggingface.co/datasets/libris... | 16 | Dataset librispeech_asr fails to load
## Describe the bug
The dataset librispeech_asr (standard Librispeech) fails to load.
## Steps to reproduce the bug
```python
datasets.load_dataset("librispeech_asr")
```
## Expected results
It should download and prepare the whole dataset (all subsets).
In [the doc... | [
-0.2864782512,
-0.4839607477,
0.0680988878,
0.6340244412,
0.1525813788,
0.2355626971,
0.1461004764,
0.2371000201,
0.2123608291,
-0.0533141494,
-0.1300386786,
0.0948216319,
-0.3211013675,
0.2411901802,
0.0585025474,
-0.0512975715,
-0.1185007244,
0.2319578677,
-0.1747397184,
-0.1... |
https://github.com/huggingface/datasets/issues/4179 | Dataset librispeech_asr fails to load | Ah thanks. But wouldn't it be easier/nicer (and more canonical) to just make it in a way that simply `load_dataset("librispeech_asr")` works? | ## Describe the bug
The dataset librispeech_asr (standard Librispeech) fails to load.
## Steps to reproduce the bug
```python
datasets.load_dataset("librispeech_asr")
```
## Expected results
It should download and prepare the whole dataset (all subsets).
In [the doc](https://huggingface.co/datasets/libris... | 21 | Dataset librispeech_asr fails to load
## Describe the bug
The dataset librispeech_asr (standard Librispeech) fails to load.
## Steps to reproduce the bug
```python
datasets.load_dataset("librispeech_asr")
```
## Expected results
It should download and prepare the whole dataset (all subsets).
In [the doc... | [
-0.2864782512,
-0.4839607477,
0.0680988878,
0.6340244412,
0.1525813788,
0.2355626971,
0.1461004764,
0.2371000201,
0.2123608291,
-0.0533141494,
-0.1300386786,
0.0948216319,
-0.3211013675,
0.2411901802,
0.0585025474,
-0.0512975715,
-0.1185007244,
0.2319578677,
-0.1747397184,
-0.1... |
https://github.com/huggingface/datasets/issues/4179 | Dataset librispeech_asr fails to load | Pinging @lhoestq here, think this could make sense! Not sure however what the dictionary would then look like
The dataset librispeech_asr (standard Librispeech) fails to load.
## Steps to reproduce the bug
```python
datasets.load_dataset("librispeech_asr")
```
## Expected results
It should download and prepare the whole dataset (all subsets).
In [the doc](https://huggingface.co/datasets/libris... | 18 | Dataset librispeech_asr fails to load
## Describe the bug
The dataset librispeech_asr (standard Librispeech) fails to load.
## Steps to reproduce the bug
```python
datasets.load_dataset("librispeech_asr")
```
## Expected results
It should download and prepare the whole dataset (all subsets).
In [the doc... | [
-0.2864782512,
-0.4839607477,
0.0680988878,
0.6340244412,
0.1525813788,
0.2355626971,
0.1461004764,
0.2371000201,
0.2123608291,
-0.0533141494,
-0.1300386786,
0.0948216319,
-0.3211013675,
0.2411901802,
0.0585025474,
-0.0512975715,
-0.1185007244,
0.2319578677,
-0.1747397184,
-0.1... |
https://github.com/huggingface/datasets/issues/4179 | Dataset librispeech_asr fails to load | Would it make sense to have `clean` as the default config ?
Also I think `load_dataset("librispeech_asr")` should have raised an error saying that you need to specify a config
I also opened a PR to improve the doc: https://github.com/huggingface/datasets/pull/4183 | ## Describe the bug
The dataset librispeech_asr (standard Librispeech) fails to load.
## Steps to reproduce the bug
```python
datasets.load_dataset("librispeech_asr")
```
## Expected results
It should download and prepare the whole dataset (all subsets).
In [the doc](https://huggingface.co/datasets/libris... | 41 | Dataset librispeech_asr fails to load
## Describe the bug
The dataset librispeech_asr (standard Librispeech) fails to load.
## Steps to reproduce the bug
```python
datasets.load_dataset("librispeech_asr")
```
## Expected results
It should download and prepare the whole dataset (all subsets).
In [the doc... | [
-0.2864782512,
-0.4839607477,
0.0680988878,
0.6340244412,
0.1525813788,
0.2355626971,
0.1461004764,
0.2371000201,
0.2123608291,
-0.0533141494,
-0.1300386786,
0.0948216319,
-0.3211013675,
0.2411901802,
0.0585025474,
-0.0512975715,
-0.1185007244,
0.2319578677,
-0.1747397184,
-0.1... |
https://github.com/huggingface/datasets/issues/4179 | Dataset librispeech_asr fails to load | > Would it make sense to have `clean` as the default config ?
I think a user would expect that the default would give you the full dataset.
> Also I think `load_dataset("librispeech_asr")` should have raised you an error that says that you need to specify a config
It does raise an error, but this error confuse... | ## Describe the bug
The dataset librispeech_asr (standard Librispeech) fails to load.
## Steps to reproduce the bug
```python
datasets.load_dataset("librispeech_asr")
```
## Expected results
It should download and prepare the whole dataset (all subsets).
In [the doc](https://huggingface.co/datasets/libris... | 86 | Dataset librispeech_asr fails to load
## Describe the bug
The dataset librispeech_asr (standard Librispeech) fails to load.
## Steps to reproduce the bug
```python
datasets.load_dataset("librispeech_asr")
```
## Expected results
It should download and prepare the whole dataset (all subsets).
In [the doc... | [
-0.2864782512,
-0.4839607477,
0.0680988878,
0.6340244412,
0.1525813788,
0.2355626971,
0.1461004764,
0.2371000201,
0.2123608291,
-0.0533141494,
-0.1300386786,
0.0948216319,
-0.3211013675,
0.2411901802,
0.0585025474,
-0.0512975715,
-0.1185007244,
0.2319578677,
-0.1747397184,
-0.1... |
https://github.com/huggingface/datasets/issues/4179 | Dataset librispeech_asr fails to load | +1 for @albertz. Also think lots of people download the whole dataset (`"clean"` + `"other"`) for Librispeech.
Think there are also some people though who:
- a) Don't have the memory to store the whole dataset
- b) Just want to evaluate on one of the two configs | ## Describe the bug
The dataset librispeech_asr (standard Librispeech) fails to load.
## Steps to reproduce the bug
```python
datasets.load_dataset("librispeech_asr")
```
## Expected results
It should download and prepare the whole dataset (all subsets).
In [the doc](https://huggingface.co/datasets/libris... | 48 | Dataset librispeech_asr fails to load
## Describe the bug
The dataset librispeech_asr (standard Librispeech) fails to load.
## Steps to reproduce the bug
```python
datasets.load_dataset("librispeech_asr")
```
## Expected results
It should download and prepare the whole dataset (all subsets).
In [the doc... | [
-0.2864782512,
-0.4839607477,
0.0680988878,
0.6340244412,
0.1525813788,
0.2355626971,
0.1461004764,
0.2371000201,
0.2123608291,
-0.0533141494,
-0.1300386786,
0.0948216319,
-0.3211013675,
0.2411901802,
0.0585025474,
-0.0512975715,
-0.1185007244,
0.2319578677,
-0.1747397184,
-0.1... |
https://github.com/huggingface/datasets/issues/4179 | Dataset librispeech_asr fails to load | Ok ! Adding the "all" configuration would do the job then, thanks ! In the "all" configuration we can merge all the train.xxx splits into one "train" split, or keep them separate depending on what's the most practical to use (probably put everything in "train" no ?) | ## Describe the bug
The dataset librispeech_asr (standard Librispeech) fails to load.
## Steps to reproduce the bug
```python
datasets.load_dataset("librispeech_asr")
```
## Expected results
It should download and prepare the whole dataset (all subsets).
In [the doc](https://huggingface.co/datasets/libris... | 47 | Dataset librispeech_asr fails to load
## Describe the bug
The dataset librispeech_asr (standard Librispeech) fails to load.
## Steps to reproduce the bug
```python
datasets.load_dataset("librispeech_asr")
```
## Expected results
It should download and prepare the whole dataset (all subsets).
In [the doc... | [
-0.2864782512,
-0.4839607477,
0.0680988878,
0.6340244412,
0.1525813788,
0.2355626971,
0.1461004764,
0.2371000201,
0.2123608291,
-0.0533141494,
-0.1300386786,
0.0948216319,
-0.3211013675,
0.2411901802,
0.0585025474,
-0.0512975715,
-0.1185007244,
0.2319578677,
-0.1747397184,
-0.1... |
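A hedged sketch of what the proposed "all" configuration would amount to, built from the configs and splits named in the comments above; until such a config exists, this manual concatenation is one way to assemble the full 960h of training data:

```python
from datasets import concatenate_datasets, load_dataset

# Build one "train" split out of every train.* subset of both configs.
parts = [
    load_dataset("librispeech_asr", "clean", split="train.100"),
    load_dataset("librispeech_asr", "clean", split="train.360"),
    load_dataset("librispeech_asr", "other", split="train.500"),
]
train = concatenate_datasets(parts)
```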
https://github.com/huggingface/datasets/issues/4179 | Dataset librispeech_asr fails to load | I'm not too familiar with how to work with HuggingFace datasets, but people often do some curriculum learning scheme, where they start with train.100, later go over to train.100 + train.360, and then later use the whole train (960h). It would be good if this is easily possible.
| ## Describe the bug
The dataset librispeech_asr (standard Librispeech) fails to load.
## Steps to reproduce the bug
```python
datasets.load_dataset("librispeech_asr")
```
## Expected results
It should download and prepare the whole dataset (all subsets).
In [the doc](https://huggingface.co/datasets/libris... | 48 | Dataset librispeech_asr fails to load
## Describe the bug
The dataset librispeech_asr (standard Librispeech) fails to load.
## Steps to reproduce the bug
```python
datasets.load_dataset("librispeech_asr")
```
## Expected results
It should download and prepare the whole dataset (all subsets).
In [the doc... | [
-0.2864782512,
-0.4839607477,
0.0680988878,
0.6340244412,
0.1525813788,
0.2355626971,
0.1461004764,
0.2371000201,
0.2123608291,
-0.0533141494,
-0.1300386786,
0.0948216319,
-0.3211013675,
0.2411901802,
0.0585025474,
-0.0512975715,
-0.1185007244,
0.2319578677,
-0.1747397184,
-0.1... |
https://github.com/huggingface/datasets/issues/4179 | Dataset librispeech_asr fails to load | Hey @albertz,
opened a PR here. Think by adding the "subdataset" class to each split "train", "dev", "other" as shown here: https://github.com/huggingface/datasets/pull/4184/files#r853272727 it should be easily possible (e.g. with the filter function https://huggingface.co/docs/datasets/v2.1.0/en/package_reference/... | ## Describe the bug
The dataset librispeech_asr (standard Librispeech) fails to load.
## Steps to reproduce the bug
```python
datasets.load_dataset("librispeech_asr")
```
## Expected results
It should download and prepare the whole dataset (all subsets).
In [the doc](https://huggingface.co/datasets/libris... | 34 | Dataset librispeech_asr fails to load
## Describe the bug
The dataset librispeech_asr (standard Librispeech) fails to load.
## Steps to reproduce the bug
```python
datasets.load_dataset("librispeech_asr")
```
## Expected results
It should download and prepare the whole dataset (all subsets).
In [the doc... | [
-0.2864782512,
-0.4839607477,
0.0680988878,
0.6340244412,
0.1525813788,
0.2355626971,
0.1461004764,
0.2371000201,
0.2123608291,
-0.0533141494,
-0.1300386786,
0.0948216319,
-0.3211013675,
0.2411901802,
0.0585025474,
-0.0512975715,
-0.1185007244,
0.2319578677,
-0.1747397184,
-0.1... |
https://github.com/huggingface/datasets/issues/4179 | Dataset librispeech_asr fails to load | But also since everything is cached one could also just do:
```python
load_dataset("librispeech", "clean", "train.100")
load_dataset("librispeech", "clean", "train.100+train.360")
load_dataset("librispeech" "all", "train")
``` | ## Describe the bug
The dataset librispeech_asr (standard Librispeech) fails to load.
## Steps to reproduce the bug
```python
datasets.load_dataset("librispeech_asr")
```
## Expected results
It should download and prepare the whole dataset (all subsets).
In [the doc](https://huggingface.co/datasets/libris... | 22 | Dataset librispeech_asr fails to load
## Describe the bug
The dataset librispeech_asr (standard Librispeech) fails to load.
## Steps to reproduce the bug
```python
datasets.load_dataset("librispeech_asr")
```
## Expected results
It should download and prepare the whole dataset (all subsets).
In [the doc... | [
-0.2864782512,
-0.4839607477,
0.0680988878,
0.6340244412,
0.1525813788,
0.2355626971,
0.1461004764,
0.2371000201,
0.2123608291,
-0.0533141494,
-0.1300386786,
0.0948216319,
-0.3211013675,
0.2411901802,
0.0585025474,
-0.0512975715,
-0.1185007244,
0.2319578677,
-0.1747397184,
-0.1... |
https://github.com/huggingface/datasets/issues/4169 | Timit_asr dataset cannot be previewed recently | Thanks for reporting. The bug has already been detected, and we hope to fix it soon. | ## Dataset viewer issue for '*timit_asr*'
**Link:** *https://huggingface.co/datasets/timit_asr*
Issue: The timit-asr dataset cannot be previewed recently.
Am I the one who added this dataset ? Yes-No
No | 16 | Timit_asr dataset cannot be previewed recently
## Dataset viewer issue for '*timit_asr*'
**Link:** *https://huggingface.co/datasets/timit_asr*
Issue: The timit-asr dataset cannot be previewed recently.
Am I the one who added this dataset ? Yes-No
No
Thanks for reporting. The bug has already been detected,... | [
-0.4550265968,
-0.3681927919,
-0.062022347,
0.1747279167,
0.0391848311,
0.0492948182,
0.1823962927,
0.2734493017,
-0.3722573519,
0.2060218006,
-0.3304496109,
0.1851599962,
-0.1779927015,
0.1633446366,
0.0846916661,
-0.1212468445,
0.1103404984,
0.0023505038,
-0.4338065684,
0.100... |
https://github.com/huggingface/datasets/issues/4169 | Timit_asr dataset cannot be previewed recently | TIMIT is now a dataset that requires manual download, see #4145
Therefore it might take a bit more time to fix it | ## Dataset viewer issue for '*timit_asr*'
**Link:** *https://huggingface.co/datasets/timit_asr*
Issue: The timit-asr dataset cannot be previewed recently.
Am I the one who added this dataset ? Yes-No
No | 22 | Timit_asr dataset cannot be previewed recently
## Dataset viewer issue for '*timit_asr*'
**Link:** *https://huggingface.co/datasets/timit_asr*
Issue: The timit-asr dataset cannot be previewed recently.
Am I the one who added this dataset ? Yes-No
No
TIMIT is now a dataset that requires manual download, se... | [
-0.4907503128,
-0.3528876305,
-0.0921196267,
0.1158469245,
0.0223264229,
0.0131127303,
0.1624344736,
0.2413311154,
-0.4454859793,
0.1590494961,
-0.2978596687,
0.1918077618,
-0.1897066534,
0.2122343779,
0.0270437244,
-0.1452159584,
0.0503632352,
0.0330564231,
-0.5349831581,
0.14... |
https://github.com/huggingface/datasets/issues/4169 | Timit_asr dataset cannot be previewed recently | > TIMIT is now a dataset that requires manual download, see #4145
>
> Therefore it might take a bit more time to fix it
Thank you for your quick response. Exactly, I also found the manual download issue in the morning. But when I used *list_datasets()* to check the available datasets, *'timit_asr'* is still in ... | ## Dataset viewer issue for '*timit_asr*'
**Link:** *https://huggingface.co/datasets/timit_asr*
Issue: The timit-asr dataset cannot be previewed recently.
Am I the one who added this dataset ? Yes-No
No | 86 | Timit_asr dataset cannot be previewed recently
## Dataset viewer issue for '*timit_asr*'
**Link:** *https://huggingface.co/datasets/timit_asr*
Issue: The timit-asr dataset cannot be previewed recently.
Am I the one who added this dataset ? Yes-No
No
> TIMIT is now a dataset that requires manual download, ... | [
-0.2399716973,
-0.3775832951,
-0.1303032339,
0.189071089,
0.0118970713,
0.1102470085,
0.1603164226,
0.0570888817,
-0.3138604462,
0.0344567224,
-0.1932356656,
0.1399660408,
-0.2343464792,
0.0799185261,
0.1763837188,
0.0130789317,
0.1113243029,
-0.051803112,
-0.3212527931,
-0.034... |
https://github.com/huggingface/datasets/issues/4169 | Timit_asr dataset cannot be previewed recently | Yes exactly. If you try to load the dataset it will ask you to download it manually first, and to pass the downloaded and extracted data like `load_dataset("timit_asr", data_dir="path/to/extracted/data")`
The URL we were using was coming from a host that doesn't have the permission to redistribute the data, and the ... | ## Dataset viewer issue for '*timit_asr*'
**Link:** *https://huggingface.co/datasets/timit_asr*
Issue: The timit-asr dataset cannot be previewed recently.
Am I the one who added this dataset ? Yes-No
No | 57 | Timit_asr dataset cannot be previewed recently
## Dataset viewer issue for '*timit_asr*'
**Link:** *https://huggingface.co/datasets/timit_asr*
Issue: The timit-asr dataset cannot be previewed recently.
Am I the one who added this dataset ? Yes-No
No
Yes exactly. If you try to load the dataset it will ask ... | [
-0.314630121,
-0.2493136823,
-0.0443634577,
0.3087725937,
-0.0127360113,
-0.0093995668,
0.1674087048,
0.1074033454,
-0.3580369651,
0.2038322091,
-0.3066549003,
0.1690923274,
-0.2346718609,
0.0140983555,
0.1313336492,
-0.070213072,
-0.0464562625,
-0.0359865911,
-0.4427014291,
0.... |
https://github.com/huggingface/datasets/issues/4163 | Optional Content Warning for Datasets | Hi! You can use the `extra_gated_prompt` YAML field in a dataset card for displaying custom messages/warnings that the user must accept before gaining access to the actual dataset. This option also keeps the viewer hidden until the user agrees to terms. | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
We now have hate speech datasets on the hub, like this one: https://huggingface.co/datasets/HannahRoseKirk/HatemojiBuild
I'm wondering if there is an option to select a content warning messa... | 41 | Optional Content Warning for Datasets
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
We now have hate speech datasets on the hub, like this one: https://huggingface.co/datasets/HannahRoseKirk/HatemojiBuild
I'm wondering if there is an ... | [
-0.2600831389,
-0.3120526373,
-0.0563245714,
-0.108602494,
0.0186993983,
0.3666293025,
0.352557838,
0.238814801,
0.3074608147,
0.2498291284,
0.1395488232,
0.3237808645,
-0.2098608315,
0.0323201641,
-0.4014856517,
0.0375654511,
-0.1355635077,
0.0450617895,
0.1742631644,
0.018343... |
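A minimal sketch of the `extra_gated_prompt` field mentioned above, written as YAML front matter in a dataset card's README.md; the warning text and file path here are illustrative, not from the thread:

```python
# Writes a README.md whose YAML front matter carries the gating prompt.
card = """\
---
extra_gated_prompt: >-
  Content warning: this dataset contains hate speech. You must agree to
  use it for research purposes only before accessing it.
---

# My hate speech dataset
"""

with open("README.md", "w", encoding="utf-8") as f:
    f.write(card)
```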
https://github.com/huggingface/datasets/issues/4152 | ArrayND error in pyarrow 5 | Where do we bump the required pyarrow version? Any inputs on how I fix this issue? | As found in https://github.com/huggingface/datasets/pull/3903, The ArrayND features fail on pyarrow 5:
```python
import pyarrow as pa
from datasets import Array2D
from datasets.table import cast_array_to_feature
arr = pa.array([[[0]]])
feature_type = Array2D(shape=(1, 1), dtype="int64")
cast_array_to_feature(a... | 16 | ArrayND error in pyarrow 5
As found in https://github.com/huggingface/datasets/pull/3903, The ArrayND features fail on pyarrow 5:
```python
import pyarrow as pa
from datasets import Array2D
from datasets.table import cast_array_to_feature
arr = pa.array([[[0]]])
feature_type = Array2D(shape=(1, 1), dtype="int... | [
-0.2717887461,
0.0057994733,
-0.0946770236,
0.076236479,
0.1772200316,
-0.1354119927,
0.5197154284,
0.285346359,
-0.1506761312,
-0.1537640542,
-0.3803839087,
0.1065837815,
0.0012881052,
0.1034748331,
0.0800986141,
-0.1847531497,
-0.0333412327,
0.215312764,
-0.08204671,
0.163662... |
https://github.com/huggingface/datasets/issues/4152 | ArrayND error in pyarrow 5 | We need to bump it in `setup.py` as well as update some CI job to use pyarrow 6 instead of 5 in `.circleci/config.yaml` and `.github/workflows/benchmarks.yaml` | As found in https://github.com/huggingface/datasets/pull/3903, The ArrayND features fail on pyarrow 5:
```python
import pyarrow as pa
from datasets import Array2D
from datasets.table import cast_array_to_feature
arr = pa.array([[[0]]])
feature_type = Array2D(shape=(1, 1), dtype="int64")
cast_array_to_feature(a... | 25 | ArrayND error in pyarrow 5
As found in https://github.com/huggingface/datasets/pull/3903, The ArrayND features fail on pyarrow 5:
```python
import pyarrow as pa
from datasets import Array2D
from datasets.table import cast_array_to_feature
arr = pa.array([[[0]]])
feature_type = Array2D(shape=(1, 1), dtype="int... | [
-0.2717887461,
0.0057994733,
-0.0946770236,
0.076236479,
0.1772200316,
-0.1354119927,
0.5197154284,
0.285346359,
-0.1506761312,
-0.1537640542,
-0.3803839087,
0.1065837815,
0.0012881052,
0.1034748331,
0.0800986141,
-0.1847531497,
-0.0333412327,
0.215312764,
-0.08204671,
0.163662... |
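A hedged sketch of the runtime side of the version bump described above: the real pin belongs in `setup.py`, but a guard like this makes the requirement explicit where the ArrayND casts run (the error message wording is an assumption):

```python
import pyarrow as pa
from packaging import version

# ArrayND casts rely on pyarrow 6 behavior, so fail fast on older versions.
if version.parse(pa.__version__) < version.parse("6.0.0"):
    raise ImportError(
        f"pyarrow>=6.0.0 is required for ArrayND features, found {pa.__version__}"
    )
```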
https://github.com/huggingface/datasets/issues/4149 | load_dataset for winoground returning decoding error | I thought I had fixed it with this after some helpful hints from @severo
```python
import datasets
token = 'hf_XXXXX'
dataset = datasets.load_dataset(
'facebook/winoground',
name='facebook--winoground',
split='train',
streaming=True,
use_auth_token=token,
)
```
but I found out that w... | ## Describe the bug
I am trying to use datasets to load winoground and I'm getting a JSON decoding error.
## Steps to reproduce the bug
```python
from datasets import load_dataset
token = 'hf_XXXXX' # my HF access token
datasets = load_dataset('facebook/winoground', use_auth_token=token)
```
## Expected res... | 50 | load_dataset for winoground returning decoding error
## Describe the bug
I am trying to use datasets to load winoground and I'm getting a JSON decoding error.
## Steps to reproduce the bug
```python
from datasets import load_dataset
token = 'hf_XXXXX' # my HF access token
datasets = load_dataset('facebook/win... | [
-0.1866350323,
0.3816437423,
-0.0002151771,
0.4397816956,
0.2120900154,
0.1175744459,
0.0252054855,
0.1846736521,
0.0800222754,
-0.1368634403,
-0.1777786762,
0.0833095759,
0.0109347403,
0.0449507199,
-0.2806118429,
-0.2287354767,
-0.0568579063,
-0.0361913219,
0.0943902135,
-0.0... |
https://github.com/huggingface/datasets/issues/4149 | load_dataset for winoground returning decoding error | Hi ! This dataset structure (image + labels in a JSON file) is not supported yet, though we're adding support for this in #4069
The following structure will be supported soon:
```
metadata.json
images/
image0.png
image1.png
...
```
Where `metadata.json` is a JSON Lines file with labels or ... | ## Describe the bug
I am trying to use datasets to load winoground and I'm getting a JSON decoding error.
## Steps to reproduce the bug
```python
from datasets import load_dataset
token = 'hf_XXXXX' # my HF access token
datasets = load_dataset('facebook/winoground', use_auth_token=token)
```
## Expected res... | 141 | load_dataset for winoground returning decoding error
## Describe the bug
I am trying to use datasets to load winoground and I'm getting a JSON decoding error.
## Steps to reproduce the bug
```python
from datasets import load_dataset
token = 'hf_XXXXX' # my HF access token
datasets = load_dataset('facebook/win... | [
-0.1866350323,
0.3816437423,
-0.0002151771,
0.4397816956,
0.2120900154,
0.1175744459,
0.0252054855,
0.1846736521,
0.0800222754,
-0.1368634403,
-0.1777786762,
0.0833095759,
0.0109347403,
0.0449507199,
-0.2806118429,
-0.2287354767,
-0.0568579063,
-0.0361913219,
0.0943902135,
-0.0... |
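A small sketch of the soon-to-be-supported layout described above, an `images/` folder plus a JSON Lines `metadata.json`; the directory name, file names, and the `caption` field are illustrative assumptions:

```python
import json
from pathlib import Path

root = Path("my_dataset")
(root / "images").mkdir(parents=True, exist_ok=True)

# One JSON object per line, each pointing at an image file.
with open(root / "metadata.json", "w", encoding="utf-8") as f:
    for i, caption in enumerate(["a cat on a mat", "a dog on a log"]):
        record = {"file_name": f"images/image{i}.png", "caption": caption}
        f.write(json.dumps(record) + "\n")
```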
https://github.com/huggingface/datasets/issues/4149 | load_dataset for winoground returning decoding error | We'll also investigate the issue with the streaming download manager in https://github.com/huggingface/datasets/issues/4139 ;) thanks for reporting | ## Describe the bug
I am trying to use datasets to load winoground and I'm getting a JSON decoding error.
## Steps to reproduce the bug
```python
from datasets import load_dataset
token = 'hf_XXXXX' # my HF access token
datasets = load_dataset('facebook/winoground', use_auth_token=token)
```
## Expected res... | 16 | load_dataset for winoground returning decoding error
## Describe the bug
I am trying to use datasets to load winoground and I'm getting a JSON decoding error.
## Steps to reproduce the bug
```python
from datasets import load_dataset
token = 'hf_XXXXX' # my HF access token
datasets = load_dataset('facebook/win... | [
-0.1866350323,
0.3816437423,
-0.0002151771,
0.4397816956,
0.2120900154,
0.1175744459,
0.0252054855,
0.1846736521,
0.0800222754,
-0.1368634403,
-0.1777786762,
0.0833095759,
0.0109347403,
0.0449507199,
-0.2806118429,
-0.2287354767,
-0.0568579063,
-0.0361913219,
0.0943902135,
-0.0... |
https://github.com/huggingface/datasets/issues/4149 | load_dataset for winoground returning decoding error | In the meantime, anyone can always download the images.zip and examples.jsonl files directly from huggingface.co - let me know if anyone has issues with that. | ## Describe the bug
I am trying to use datasets to load winoground and I'm getting a JSON decoding error.
## Steps to reproduce the bug
```python
from datasets import load_dataset
token = 'hf_XXXXX' # my HF access token
datasets = load_dataset('facebook/winoground', use_auth_token=token)
```
## Expected res... | 25 | load_dataset for winoground returning decoding error
## Describe the bug
I am trying to use datasets to load winoground and I'm getting a JSON decoding error.
## Steps to reproduce the bug
```python
from datasets import load_dataset
token = 'hf_XXXXX' # my HF access token
datasets = load_dataset('facebook/win... | [
-0.1866350323,
0.3816437423,
-0.0002151771,
0.4397816956,
0.2120900154,
0.1175744459,
0.0252054855,
0.1846736521,
0.0800222754,
-0.1368634403,
-0.1777786762,
0.0833095759,
0.0109347403,
0.0449507199,
-0.2806118429,
-0.2287354767,
-0.0568579063,
-0.0361913219,
0.0943902135,
-0.0... |
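A hedged sketch of that manual download using `huggingface_hub`; the file names are the ones the comment mentions, and the token placeholder follows the issue's own convention:

```python
from huggingface_hub import hf_hub_download

token = "hf_XXXXX"  # your HF access token (placeholder)

# Fetch the two files directly from the gated dataset repo.
for filename in ["examples.jsonl", "images.zip"]:
    path = hf_hub_download(
        repo_id="facebook/winoground",
        filename=filename,
        repo_type="dataset",
        use_auth_token=token,
    )
    print(path)
```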
https://github.com/huggingface/datasets/issues/4149 | load_dataset for winoground returning decoding error | I mirrored the files at https://huggingface.co/datasets/facebook/winoground in a folder on my local machine `winoground`
and when I tried
```python
import datasets
ds = datasets.load_from_disk('./winoground')
```
I get the following error
```python
----------------------------------------------------------------... | ## Describe the bug
I am trying to use datasets to load winoground and I'm getting a JSON decoding error.
## Steps to reproduce the bug
```python
from datasets import load_dataset
token = 'hf_XXXXX' # my HF access token
datasets = load_dataset('facebook/winoground', use_auth_token=token)
```
## Expected res... | 107 | load_dataset for winoground returning decoding error
## Describe the bug
I am trying to use datasets to load winoground and I'm getting a JSON decoding error.
## Steps to reproduce the bug
```python
from datasets import load_dataset
token = 'hf_XXXXX' # my HF access token
datasets = load_dataset('facebook/win... | [
-0.1866350323,
0.3816437423,
-0.0002151771,
0.4397816956,
0.2120900154,
0.1175744459,
0.0252054855,
0.1846736521,
0.0800222754,
-0.1368634403,
-0.1777786762,
0.0833095759,
0.0109347403,
0.0449507199,
-0.2806118429,
-0.2287354767,
-0.0568579063,
-0.0361913219,
0.0943902135,
-0.0... |
https://github.com/huggingface/datasets/issues/4149 | load_dataset for winoground returning decoding error | Note that `load_from_disk` is the function that reloads an Arrow dataset saved with `my_dataset.save_to_disk`.
Once we do support images with metadata you'll be able to use `load_dataset("facebook/winoground")` directly (or `load_dataset("./winoground")` if you've cloned the winoground repository locally).
I am trying to use datasets to load winoground and I'm getting a JSON decoding error.
## Steps to reproduce the bug
```python
from datasets import load_dataset
token = 'hf_XXXXX' # my HF access token
datasets = load_dataset('facebook/winoground', use_auth_token=token)
```
## Expected res... | 37 | load_dataset for winoground returning decoding error
## Describe the bug
I am trying to use datasets to load winoground and I'm getting a JSON decoding error.
## Steps to reproduce the bug
```python
from datasets import load_dataset
token = 'hf_XXXXX' # my HF access token
datasets = load_dataset('facebook/win... | [
-0.1866350323,
0.3816437423,
-0.0002151771,
0.4397816956,
0.2120900154,
0.1175744459,
0.0252054855,
0.1846736521,
0.0800222754,
-0.1368634403,
-0.1777786762,
0.0833095759,
0.0109347403,
0.0449507199,
-0.2806118429,
-0.2287354767,
-0.0568579063,
-0.0361913219,
0.0943902135,
-0.0... |
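A short sketch of the distinction drawn above: `load_from_disk` only reads what `save_to_disk` wrote, while `load_dataset` handles hub repos and dataset scripts. The dataset name and output path are illustrative:

```python
from datasets import load_dataset, load_from_disk

ds = load_dataset("rotten_tomatoes", split="train")  # hub repo / script
ds.save_to_disk("./my_arrow_copy")                    # Arrow files + metadata

reloaded = load_from_disk("./my_arrow_copy")          # reads only that format
assert reloaded.num_rows == ds.num_rows
```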
https://github.com/huggingface/datasets/issues/4149 | load_dataset for winoground returning decoding error | Apologies for the delay. I added a custom dataset loading script for winoground. It should work now, with an auth token:
`examples = load_dataset('facebook/winoground', use_auth_token=<your auth token>)`
Let me know if there are any issues | ## Describe the bug
I am trying to use datasets to load winoground and I'm getting a JSON decoding error.
## Steps to reproduce the bug
```python
from datasets import load_dataset
token = 'hf_XXXXX' # my HF access token
datasets = load_dataset('facebook/winoground', use_auth_token=token)
```
## Expected res... | 35 | load_dataset for winoground returning decoding error
## Describe the bug
I am trying to use datasets to load winoground and I'm getting a JSON decoding error.
## Steps to reproduce the bug
```python
from datasets import load_dataset
token = 'hf_XXXXX' # my HF access token
datasets = load_dataset('facebook/win... | [
-0.1866350323,
0.3816437423,
-0.0002151771,
0.4397816956,
0.2120900154,
0.1175744459,
0.0252054855,
0.1846736521,
0.0800222754,
-0.1368634403,
-0.1777786762,
0.0833095759,
0.0109347403,
0.0449507199,
-0.2806118429,
-0.2287354767,
-0.0568579063,
-0.0361913219,
0.0943902135,
-0.0... |
https://github.com/huggingface/datasets/issues/4149 | load_dataset for winoground returning decoding error | Adding the dataset loading script definitely didn't take as long as I thought it would 😅 | ## Describe the bug
I am trying to use datasets to load winoground and I'm getting a JSON decoding error.
## Steps to reproduce the bug
```python
from datasets import load_dataset
token = 'hf_XXXXX' # my HF access token
datasets = load_dataset('facebook/winoground', use_auth_token=token)
```
## Expected res... | 16 | load_dataset for winoground returning decoding error
## Describe the bug
I am trying to use datasets to load winoground and I'm getting a JSON decoding error.
## Steps to reproduce the bug
```python
from datasets import load_dataset
token = 'hf_XXXXX' # my HF access token
datasets = load_dataset('facebook/win... | [
-0.1866350323,
0.3816437423,
-0.0002151771,
0.4397816956,
0.2120900154,
0.1175744459,
0.0252054855,
0.1846736521,
0.0800222754,
-0.1368634403,
-0.1777786762,
0.0833095759,
0.0109347403,
0.0449507199,
-0.2806118429,
-0.2287354767,
-0.0568579063,
-0.0361913219,
0.0943902135,
-0.0... |
https://github.com/huggingface/datasets/issues/4146 | SAMSum dataset viewer not working | Currently, only the datasets that can be streamed support the dataset viewer. Maybe @lhoestq @albertvillanova or @mariosasko could give more details about why the dataset cannot be streamed. | ## Dataset viewer issue for '*name of the dataset*'
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| 28 | SAMSum dataset viewer not working
## Dataset viewer issue for '*name of the dataset*'
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset ? Yes-No
Currently, only the datasets that can be streamed support the dataset viewer. Maybe @lhoestq @alb... | [
-0.4890426099,
-0.1947719157,
0.019912146,
0.1662127227,
0.3226157129,
0.1772111207,
0.1533230394,
0.1406114697,
0.0136461481,
0.1777696908,
-0.1680707932,
0.3436411619,
0.014858745,
0.169584021,
-0.0610328689,
-0.1996840984,
0.1649658084,
0.0454549976,
-0.2660045624,
0.1820433... |
https://github.com/huggingface/datasets/issues/4146 | SAMSum dataset viewer not working | It looks like the host (https://arxiv.org) doesn't allow HTTP Range requests, which is what we use to stream data.
This can be fixed if we host the data ourselves, which is ok since the dataset is under CC BY-NC-ND 4.0
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset ? Yes-No
| 40 | SAMSum dataset viewer not working
## Dataset viewer issue for '*name of the dataset*'
**Link:** *link to the dataset viewer page*
*short description of the issue*
Am I the one who added this dataset ? Yes-No
It looks like the host (https://arxiv.org) doesn't allow HTTP Range requests, which is what we us... | [
-0.4041630626,
-0.2161870599,
0.0099699516,
0.0950218961,
0.381605804,
0.0499521047,
0.0980621651,
0.2619544566,
-0.0217940193,
0.1190104559,
-0.2075585127,
0.3886729479,
0.1510302722,
0.0338380709,
-0.0854026824,
-0.0889917836,
0.0554587245,
0.0523284972,
-0.3733109534,
0.1381... |
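A quick sketch of how one might check whether a host honors HTTP Range requests, which is what streaming relies on; the URL is a placeholder, not the actual arxiv.org file:

```python
import requests

url = "https://example.com/data/samsum.zip"  # hypothetical file URL

# Ask for the first byte only; hosts that support ranges answer 206.
r = requests.get(url, headers={"Range": "bytes=0-0"}, stream=True)
print(r.status_code)                   # 206 -> range requests supported
print(r.headers.get("Accept-Ranges"))  # "bytes" is another good signal
```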
https://github.com/huggingface/datasets/issues/4143 | Unable to download `Wikipedia` 20220301.en version | Hi! We've recently updated the Wikipedia script, so these changes are only available on master and can be fetched as follows:
```python
dataset_wikipedia = load_dataset("wikipedia", "20220301.en", revision="master")
``` | ## Describe the bug
Unable to download `Wikipedia` dataset, 20220301.en version
## Steps to reproduce the bug
```python
!pip install apache_beam mwparserfromhell
dataset_wikipedia = load_dataset("wikipedia", "20220301.en")
```
## Actual results
```
ValueError: BuilderConfig 20220301.en not found.
Avail... | 28 | Unable to download `Wikipedia` 20220301.en version
## Describe the bug
Unable to download `Wikipedia` dataset, 20220301.en version
## Steps to reproduce the bug
```python
!pip install apache_beam mwparserfromhell
dataset_wikipedia = load_dataset("wikipedia", "20220301.en")
```
## Actual results
```
Val... | [
-0.3187177479,
0.1336507201,
-0.1322391182,
0.2605266869,
-0.0983971879,
0.069312878,
0.3329180777,
0.4483046532,
0.2501382232,
0.3041918874,
0.2490293533,
0.1295749545,
0.0758542866,
0.327134639,
0.1037939712,
-0.4493441582,
0.2669945955,
0.1209755167,
-0.3334058225,
-0.087865... |
https://github.com/huggingface/datasets/issues/4143 | Unable to download `Wikipedia` 20220301.en version | Hi, how can I load the previous "20200501.en" version of wikipedia which had been downloaded to the default path? Thanks! | ## Describe the bug
Unable to download `Wikipedia` dataset, 20220301.en version
## Steps to reproduce the bug
```python
!pip install apache_beam mwparserfromhell
dataset_wikipedia = load_dataset("wikipedia", "20220301.en")
```
## Actual results
```
ValueError: BuilderConfig 20220301.en not found.
Avail... | 20 | Unable to download `Wikipedia` 20220301.en version
## Describe the bug
Unable to download `Wikipedia` dataset, 20220301.en version
## Steps to reproduce the bug
```python
!pip install apache_beam mwparserfromhell
dataset_wikipedia = load_dataset("wikipedia", "20220301.en")
```
## Actual results
```
Val... | [
-0.3187177479,
0.1336507201,
-0.1322391182,
0.2605266869,
-0.0983971879,
0.069312878,
0.3329180777,
0.4483046532,
0.2501382232,
0.3041918874,
0.2490293533,
0.1295749545,
0.0758542866,
0.327134639,
0.1037939712,
-0.4493441582,
0.2669945955,
0.1209755167,
-0.3334058225,
-0.087865... |
https://github.com/huggingface/datasets/issues/4140 | Error loading arxiv data set | Hi! I think this error may be related to using an older version of the library. I was able to load the dataset without any issues using the latest version of `datasets`. Can you upgrade to the latest version of `datasets` and try again? :) | ## Describe the bug
A clear and concise description of what the bug is.
I met the error below when loading arxiv dataset via `nlp.load_dataset('scientific_papers', 'arxiv',)`.
```
Traceback (most recent call last):
File "scripts/summarization.py", line 354, in <module>
main(args)
File "scripts/summari... | 45 | Error loading arxiv data set
## Describe the bug
A clear and concise description of what the bug is.
I met the error below when loading arxiv dataset via `nlp.load_dataset('scientific_papers', 'arxiv',)`.
```
Traceback (most recent call last):
File "scripts/summarization.py", line 354, in <module>
main... | [
-0.1380846947,
0.1616727114,
0.0079347193,
0.2819574773,
0.2916662395,
-0.047817491,
0.0685460269,
0.5627236962,
0.054112047,
-0.0670319647,
-0.063769348,
0.2794142067,
0.0753566325,
0.0028108654,
0.1414510608,
-0.1872990429,
-0.0851801038,
0.1162745208,
0.0183116663,
-0.009948... |
https://github.com/huggingface/datasets/issues/4140 | Error loading arxiv data set | Hi! As @stevhliu suggested, to fix the issue, update the lib to the newest version with:
```
pip install -U datasets
```
and download the dataset as follows:
```python
from datasets import load_dataset
dset = load_dataset('scientific_papers', 'arxiv', download_mode="force_redownload")
``` | ## Describe the bug
A clear and concise description of what the bug is.
I met the error below when loading arxiv dataset via `nlp.load_dataset('scientific_papers', 'arxiv',)`.
```
Traceback (most recent call last):
File "scripts/summarization.py", line 354, in <module>
main(args)
File "scripts/summari... | 39 | Error loading arxiv data set
## Describe the bug
A clear and concise description of what the bug is.
I met the error below when loading arxiv dataset via `nlp.load_dataset('scientific_papers', 'arxiv',)`.
```
Traceback (most recent call last):
File "scripts/summarization.py", line 354, in <module>
main... | [
-0.1380846947,
0.1616727114,
0.0079347193,
0.2819574773,
0.2916662395,
-0.047817491,
0.0685460269,
0.5627236962,
0.054112047,
-0.0670319647,
-0.063769348,
0.2794142067,
0.0753566325,
0.0028108654,
0.1414510608,
-0.1872990429,
-0.0851801038,
0.1162745208,
0.0183116663,
-0.009948... |
https://github.com/huggingface/datasets/issues/4140 | Error loading arxiv data set | Thanks for the quick response! It works now. The problem is that I used nlp.load_dataset instead of datasets.load_dataset.
A clear and concise description of what the bug is.
I met the error below when loading arxiv dataset via `nlp.load_dataset('scientific_papers', 'arxiv',)`.
```
Traceback (most recent call last):
File "scripts/summarization.py", line 354, in <module>
main(args)
File "scripts/summari... | 20 | Error loading arxiv data set
## Describe the bug
A clear and concise description of what the bug is.
I met the error below when loading arxiv dataset via `nlp.load_dataset('scientific_papers', 'arxiv',)`.
```
Traceback (most recent call last):
File "scripts/summarization.py", line 354, in <module>
main... | [
-0.1380846947,
0.1616727114,
0.0079347193,
0.2819574773,
0.2916662395,
-0.047817491,
0.0685460269,
0.5627236962,
0.054112047,
-0.0670319647,
-0.063769348,
0.2794142067,
0.0753566325,
0.0028108654,
0.1414510608,
-0.1872990429,
-0.0851801038,
0.1162745208,
0.0183116663,
-0.009948... |
https://github.com/huggingface/datasets/issues/4139 | Dataset viewer issue for Winoground | I thought this issue was related to the error I was seeing, but upon consideration I'd think the dataset viewer would return a 500 (unable to create the split like me) or a 404 (unable to load split b/c it was never created) error if it was having the issue I was seeing in #4149. 401 message makes it look like dataset ... | ## Dataset viewer issue for 'Winoground'
**Link:** [*link to the dataset viewer page*](https://huggingface.co/datasets/facebook/winoground/viewer/facebook--winoground/train)
*short description of the issue*
Getting 401, message='Unauthorized'
The dataset is subject to authorization, but I can access the files f... | 84 | Dataset viewer issue for Winoground
## Dataset viewer issue for 'Winoground'
**Link:** [*link to the dataset viewer page*](https://huggingface.co/datasets/facebook/winoground/viewer/facebook--winoground/train)
*short description of the issue*
Getting 401, message='Unauthorized'
The dataset is subject to autho... | [
-0.2158226818,
0.3918247223,
0.0195265897,
0.2488378882,
-0.1773584932,
0.0117628695,
0.2099894434,
-0.0989727303,
-0.3108904958,
-0.1287291795,
-0.2136164159,
-0.0137757929,
-0.0942658782,
0.0527165309,
-0.1679512858,
0.0808953792,
-0.2165971696,
-0.1095398068,
-0.066265963,
-... |
https://github.com/huggingface/datasets/issues/4139 | Dataset viewer issue for Winoground | To replicate:
```python
>>> import datasets
>>> dataset= datasets.load_dataset('facebook/winoground', name='facebook--winoground', split='train', use_auth_token="hf_app_...", streaming=True)
>>> next(iter(dataset))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/h... | ## Dataset viewer issue for 'Winoground'
**Link:** [*link to the dataset viewer page*](https://huggingface.co/datasets/facebook/winoground/viewer/facebook--winoground/train)
*short description of the issue*
Getting 401, message='Unauthorized'
The dataset is subject to authorization, but I can access the files f... | 200 | Dataset viewer issue for Winoground
## Dataset viewer issue for 'Winoground'
**Link:** [*link to the dataset viewer page*](https://huggingface.co/datasets/facebook/winoground/viewer/facebook--winoground/train)
*short description of the issue*
Getting 401, message='Unauthorized'
The dataset is subject to autho... | [
-0.2085140198,
0.3726015687,
0.0102579454,
0.1812564433,
0.035316363,
0.0951514691,
0.2718737125,
-0.0505480245,
-0.2809483707,
-0.0899924561,
-0.3027311862,
-0.105369404,
-0.1971441656,
-0.0161183253,
-0.0181891136,
0.2671881914,
-0.0818876177,
-0.1981697232,
-0.1157177165,
-0... |
https://github.com/huggingface/datasets/issues/4139 | Dataset viewer issue for Winoground | ~~Using your command to replicate and changing `use_token` to `use_auth_token` fixes the problem I was seeing in #4149.~~
Never mind, it gave me an iterator to a method returning the same 401s. Changing `use_token` to `use_auth_token` does not fix the issue.
**Link:** [*link to the dataset viewer page*](https://huggingface.co/datasets/facebook/winoground/viewer/facebook--winoground/train)
*short description of the issue*
Getting 401, message='Unauthorized'
The dataset is subject to authorization, but I can access the files f... | 40 | Dataset viewer issue for Winoground
## Dataset viewer issue for 'Winoground'
**Link:** [*link to the dataset viewer page*](https://huggingface.co/datasets/facebook/winoground/viewer/facebook--winoground/train)
*short description of the issue*
Getting 401, message='Unauthorized'
The dataset is subject to autho... | [
-0.3063986301,
0.3560656309,
0.0578553751,
0.0977232009,
0.0235412195,
-0.0176560562,
0.2522362769,
0.0431539416,
-0.2548046112,
-0.0015680429,
-0.272341311,
0.0449237265,
-0.1244985685,
0.1341281831,
-0.1148338914,
0.2799378633,
-0.0799767449,
-0.1969622523,
-0.2051309198,
-0.... |
https://github.com/huggingface/datasets/issues/4139 | Dataset viewer issue for Winoground | After investigation with @severo , we found a potential culprit: https://github.com/huggingface/datasets/blob/3cd0a009a43f9f174056d70bfa2ca32216181926/src/datasets/utils/streaming_download_manager.py#L610-L624
The streaming manager does not seem to pass `use_auth_token` to `fsspec` when streaming and not iterating c... | ## Dataset viewer issue for 'Winoground'
**Link:** [*link to the dataset viewer page*](https://huggingface.co/datasets/facebook/winoground/viewer/facebook--winoground/train)
*short description of the issue*
Getting 401, message='Unauthorized'
The dataset is subject to authorization, but I can access the files f... | 35 | Dataset viewer issue for Winoground
## Dataset viewer issue for 'Winoground'
**Link:** [*link to the dataset viewer page*](https://huggingface.co/datasets/facebook/winoground/viewer/facebook--winoground/train)
*short description of the issue*
Getting 401, message='Unauthorized'
The dataset is subject to autho... | [
-0.3267650604,
0.3238722086,
0.0456444472,
0.2188278735,
0.1186774671,
0.06968797,
0.046101857,
-0.2330045551,
-0.28247419,
-0.0623377971,
-0.289550066,
-0.0948182791,
-0.1721875966,
0.1533356756,
-0.0418442488,
0.2980025411,
-0.0844593048,
-0.2825906575,
-0.1895950288,
0.06656... |
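A hedged sketch of what forwarding the token to `fsspec` amounts to for an HTTP file: the authorization header has to reach the underlying client. `client_kwargs` is the standard HTTPFileSystem option, but the exact plumbing inside `datasets` may differ, and the token is a placeholder:

```python
import fsspec

token = "hf_XXXXX"  # hypothetical token
url = "https://huggingface.co/datasets/facebook/winoground/resolve/main/examples.jsonl"

# Without this header, requests against a gated repo come back 401.
with fsspec.open(
    url,
    client_kwargs={"headers": {"Authorization": f"Bearer {token}"}},
) as f:
    first_line = f.readline()
```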
https://github.com/huggingface/datasets/issues/4139 | Dataset viewer issue for Winoground | I was able to reproduce it on a private dataset, let me work on a fix | ## Dataset viewer issue for 'Winoground'
**Link:** [*link to the dataset viewer page*](https://huggingface.co/datasets/facebook/winoground/viewer/facebook--winoground/train)
*short description of the issue*
Getting 401, message='Unauthorized'
The dataset is subject to authorization, but I can access the files f... | 16 | Dataset viewer issue for Winoground
## Dataset viewer issue for 'Winoground'
**Link:** [*link to the dataset viewer page*](https://huggingface.co/datasets/facebook/winoground/viewer/facebook--winoground/train)
*short description of the issue*
Getting 401, message='Unauthorized'
The dataset is subject to autho... | [
-0.2385866791,
0.3803409636,
0.0198633112,
0.2570517957,
0.0456076525,
0.1084087119,
0.0905541405,
-0.128638804,
-0.3383243978,
-0.0183006171,
-0.3247798681,
-0.022291854,
-0.1956522763,
0.0001293665,
-0.0990194157,
0.1704097092,
-0.1324594617,
-0.213983506,
-0.1198510528,
-0.0... |
https://github.com/huggingface/datasets/issues/4139 | Dataset viewer issue for Winoground | Thanks for the heads up, I still need to fix some tests that are failing in the CI before merging ;) | ## Dataset viewer issue for 'Winoground'
**Link:** [*link to the dataset viewer page*](https://huggingface.co/datasets/facebook/winoground/viewer/facebook--winoground/train)
*short description of the issue*
Getting 401, message='Unauthorized'
The dataset is subject to authorization, but I can access the files f... | 21 | Dataset viewer issue for Winoground
## Dataset viewer issue for 'Winoground'
**Link:** [*link to the dataset viewer page*](https://huggingface.co/datasets/facebook/winoground/viewer/facebook--winoground/train)
*short description of the issue*
Getting 401, message='Unauthorized'
The dataset is subject to autho... | [
-0.2774170935,
0.3570375741,
0.0189055726,
0.2028404623,
-0.0644622073,
0.1670638472,
0.1320149451,
-0.1436326951,
-0.3657690883,
-0.0115631241,
-0.3856449425,
-0.1237949207,
-0.12703228,
0.1360718608,
-0.1109700277,
0.2757478058,
-0.0134431385,
-0.2193367779,
-0.1193013713,
-0... |
https://github.com/huggingface/datasets/issues/4139 | Dataset viewer issue for Winoground | The fix has been merged, we'll do a new release soon, and update the dataset viewer | ## Dataset viewer issue for 'Winoground'
**Link:** [*link to the dataset viewer page*](https://huggingface.co/datasets/facebook/winoground/viewer/facebook--winoground/train)
*short description of the issue*
Getting 401, message='Unauthorized'
The dataset is subject to authorization, but I can access the files f... | 16 | Dataset viewer issue for Winoground
## Dataset viewer issue for 'Winoground'
**Link:** [*link to the dataset viewer page*](https://huggingface.co/datasets/facebook/winoground/viewer/facebook--winoground/train)
*short description of the issue*
Getting 401, message='Unauthorized'
The dataset is subject to autho... | [
-0.2664144635,
0.3596874177,
0.0041653877,
0.2217418849,
0.0226435754,
0.1016643941,
0.1090605333,
-0.1083729938,
-0.3371365368,
-0.017149156,
-0.3307524025,
-0.0515377708,
-0.2027523667,
0.0162699874,
-0.0730734169,
0.2063014954,
-0.1037210748,
-0.2384270281,
-0.1375193894,
-0... |
https://github.com/huggingface/datasets/issues/4138 | Incorrect Russian filenames encoding after extraction by datasets.DownloadManager.download_and_extract() | To reproduce:
```python
>>> import datasets
>>> datasets.get_dataset_split_names('MalakhovIlya/RuREBus', config_name='raw_txt')
Traceback (most recent call last):
File "/home/slesage/hf/datasets-preview-backend/.venv/lib/python3.9/site-packages/datasets/inspect.py", line 280, in get_dataset_config_info
fo... | ## Dataset viewer issue for 'MalakhovIlya/RuREBus'
**Link:** https://huggingface.co/datasets/MalakhovIlya/RuREBus
**Description**
Using os.walk(topdown=False) in DatasetBuilder causes following error:
Status code: 400
Exception: TypeError
Message: xwalk() got an unexpected keyword argument 'topdow... | 143 | Incorrect Russian filenames encoding after extraction by datasets.DownloadManager.download_and_extract()
## Dataset viewer issue for 'MalakhovIlya/RuREBus'
**Link:** https://huggingface.co/datasets/MalakhovIlya/RuREBus
**Description**
Using os.walk(topdown=False) in DatasetBuilder causes following error:
Stat... | [
-0.0074534067,
-0.0631545484,
-0.0180229768,
0.3883336186,
0.4040850997,
-0.0412305705,
-0.0985663086,
0.1355754584,
-0.1796884537,
0.2541795373,
-0.1795740128,
0.4243224263,
-0.0196610987,
0.1638035178,
0.0396400429,
-0.1715617478,
0.1249552965,
-0.0598616861,
-0.1608453095,
-... |
https://github.com/huggingface/datasets/issues/4138 | Incorrect Russian filenames encoding after extraction by datasets.DownloadManager.download_and_extract() | Hi! This issue stems from the fact that `xwalk`, which is a streamable version of `os.walk`, doesn't support the `topdown` param due to `fsspec`'s `walk` also not supporting it, so fixing this issue could be tricky.
@MalakhovIlyaPavlovich You can avoid the error by tweaking your data processing and not using this ... | ## Dataset viewer issue for 'MalakhovIlya/RuREBus'
**Link:** https://huggingface.co/datasets/MalakhovIlya/RuREBus
**Description**
Using os.walk(topdown=False) in DatasetBuilder causes following error:
Status code: 400
Exception: TypeError
Message: xwalk() got an unexpected keyword argument 'topdow... | 59 | Incorrect Russian filenames encoding after extraction by datasets.DownloadManager.download_and_extract()
## Dataset viewer issue for 'MalakhovIlya/RuREBus'
**Link:** https://huggingface.co/datasets/MalakhovIlya/RuREBus
**Description**
Using os.walk(topdown=False) in DatasetBuilder causes following error:
Stat... | [ -0.059939228, 0.0004643627, 0.0007156899, 0.2929068506, 0.4725809693, -0.0914142802, -0.2173765898, 0.0923400596, -0.0890551955, 0.357228905, -0.0973184556, 0.3023453951, 0.031104533, 0.2337282598, -0.040130768, -0.2049592584, 0.1728460789, -0.1248709559, -0.0916411132, -0.2312... |
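To make the `xwalk` limitation above concrete: streaming-compatible walks only support the default top-down order, but the children-before-parents property of `os.walk(topdown=False)` — which in-place renaming needs — can be emulated from a top-down walk. A hedged sketch, not the actual RuREBus code:

```python
import os

def walk_bottom_up(folder):
    # A plain top-down walk yields each directory before its descendants
    # (preorder); materialize it...
    entries = list(os.walk(folder))
    # ...and iterate in reverse, so every directory is visited after its
    # contents. Sibling order differs from os.walk(topdown=False), but the
    # children-before-parents guarantee that renaming relies on holds.
    yield from reversed(entries)
```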
https://github.com/huggingface/datasets/issues/4138 | Incorrect Russian filenames encoding after extraction by datasets.DownloadManager.download_and_extract() | @mariosasko thank you for your reply. I couldn't reproduce the error shown by @severo either, on Ubuntu 20.04.3 LTS, Windows 10, or Google Colab environments. But trying to avoid using os.walk(topdown=False) and Path.rename(), in _split_generators I replaced
```
def decode_file_names(folder):
for root, dirs, files i... | ## Dataset viewer issue for 'MalakhovIlya/RuREBus'
**Link:** https://huggingface.co/datasets/MalakhovIlya/RuREBus
**Description**
Using os.walk(topdown=False) in DatasetBuilder causes following error:
Status code: 400
Exception: TypeError
Message: xwalk() got an unexpected keyword argument 'topdow... | 213 | Incorrect Russian filenames encoding after extraction by datasets.DownloadManager.download_and_extract()
## Dataset viewer issue for 'MalakhovIlya/RuREBus'
**Link:** https://huggingface.co/datasets/MalakhovIlya/RuREBus
**Description**
Using os.walk(topdown=False) in DatasetBuilder causes following error:
Stat... | [ -0.0071319346, 0.0737962648, 0.0328600109, 0.3899852931, 0.4107298851, -0.1913454682, -0.0048413379, 0.2006876469, -0.2905463874, 0.3830552399, -0.2088659853, 0.4631153941, 0.0013721877, 0.1579345316, -0.0157347899, -0.2802539766, 0.1035854295, -0.1077969447, -0.0890320688, -0.... |
https://github.com/huggingface/datasets/issues/4138 | Incorrect Russian filenames encoding after extraction by datasets.DownloadManager.download_and_extract() | This is what I get when I try to stream the `raw_txt` subset:
```python
>>> dset = load_dataset("MalakhovIlya/RuREBus", "raw_txt", split="raw_txt", streaming=True)
>>> next(iter(dset))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
```
So there is a bug in your script. | ## Dataset viewer issue for 'MalakhovIlya/RuREBus'
**Link:** https://huggingface.co/datasets/MalakhovIlya/RuREBus
**Description**
Using os.walk(topdown=False) in DatasetBuilder causes following error:
Status code: 400
Exception: TypeError
Message: xwalk() got an unexpected keyword argument 'topdow... | 44 | Incorrect Russian filenames encoding after extraction by datasets.DownloadManager.download_and_extract()
## Dataset viewer issue for 'MalakhovIlya/RuREBus'
**Link:** https://huggingface.co/datasets/MalakhovIlya/RuREBus
**Description**
Using os.walk(topdown=False) in DatasetBuilder causes following error:
Stat... | [ -0.1574705839, -0.1151819378, 0.0108459154, 0.3141182065, 0.3185798228, 0.0488690995, -0.0708808005, 0.1852437556, -0.0354310609, 0.2366666198, -0.0272180364, 0.2307993174, 0.0007413884, 0.2097198665, 0.1422531903, -0.1830073148, 0.1044389158, -0.0119180959, -0.128848955, -0.19... |
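The `next(iter(...))` check in the row above doubles as a smoke test for any loading script. A small sketch that previews a few examples and distinguishes an empty generator from other failures — the dataset and split names are simply the ones from this thread:

```python
from itertools import islice

from datasets import load_dataset

# Sketch: stream the split and materialize only the first few examples;
# an empty preview means the generator yielded nothing, which is exactly
# what the StopIteration above signals.
dset = load_dataset("MalakhovIlya/RuREBus", "raw_txt", split="raw_txt", streaming=True)
preview = list(islice(dset, 3))
print(preview if preview else "no examples yielded")
```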
https://github.com/huggingface/datasets/issues/4138 | Incorrect Russian filenames encoding after extraction by datasets.DownloadManager.download_and_extract() | streaming=True helped me to find the solution. I fixed
```
def extract(zip_file_path):
p = Path(zip_file_path)
dest_dir = str(p.parent / 'extracted' / p.stem)
os.makedirs(dest_dir, exist_ok=True)
with zipfile.ZipFile(zip_file_path) as archive:
for file_info in tqdm(archive.infolist(), desc='E... | ## Dataset viewer issue for 'MalakhovIlya/RuREBus'
**Link:** https://huggingface.co/datasets/MalakhovIlya/RuREBus
**Description**
Using os.walk(topdown=False) in DatasetBuilder causes following error:
Status code: 400
Exception: TypeError
Message: xwalk() got an unexpected keyword argument 'topdow... | 89 | Incorrect Russian filenames encoding after extraction by datasets.DownloadManager.download_and_extract()
## Dataset viewer issue for 'MalakhovIlya/RuREBus'
**Link:** https://huggingface.co/datasets/MalakhovIlya/RuREBus
**Description**
Using os.walk(topdown=False) in DatasetBuilder causes following error:
Stat... | [ -0.0119793462, -0.1327393204, 0.0184508692, 0.4076127708, 0.4494683743, -0.187553212, -0.1943377405, 0.1840333492, -0.3235189617, 0.28901124, -0.2319498509, 0.4692307711, 0.0145375272, 0.0691847429, 0.0689553469, -0.2021198571, 0.0421915241, -0.1188558117, -0.2840292454, -0.275... |
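The `extract` helper in the row above is cut off, but the behavior it works around is well known: `zipfile` decodes member names as cp437 unless the archive flags them as UTF-8, which mangles Cyrillic names originally stored as cp866. A hedged sketch of that re-decoding step — cp866 is the usual encoding for Russian-made archives, so verify it against your files:

```python
import os
import zipfile

def extract_with_fixed_names(zip_file_path, dest_dir):
    os.makedirs(dest_dir, exist_ok=True)
    with zipfile.ZipFile(zip_file_path) as archive:
        for info in archive.infolist():
            name = info.filename
            # Bit 11 (0x800) of flag_bits marks UTF-8 names; when it is
            # unset, zipfile has already decoded the raw bytes as cp437,
            # so round-trip them and re-decode as cp866.
            if not info.flag_bits & 0x800:
                name = name.encode("cp437").decode("cp866")
            target = os.path.join(dest_dir, name)
            if name.endswith("/"):
                os.makedirs(target, exist_ok=True)
                continue
            os.makedirs(os.path.dirname(target) or dest_dir, exist_ok=True)
            with archive.open(info) as src, open(target, "wb") as dst:
                dst.write(src.read())
```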
https://github.com/huggingface/datasets/issues/4134 | ELI5 supporting documents | Hi ! Please post your question on the [forum](https://discuss.huggingface.co/), more people will be able to help you there ;) | If I am using dense search to create supporting documents for ELI5, how much time will it take? I read somewhere that it takes about 18 hrs. | 19 | ELI5 supporting documents
If I am using dense search to create supporting documents for ELI5, how much time will it take? I read somewhere that it takes about 18 hrs.
Hi ! Please post your question on the [forum](https://discuss.huggingface.co/), more people will be able to help you there ;) | [ 0.2901054323, -0.5284781456, -0.2327079624, 0.2695806324, 0.0564925112, -0.2928771675, 0.2667973936, -0.1374765933, -0.0974975601, 0.3248697817, 0.319763124, -0.4961234629, -0.1550011188, 0.3862406015, -0.2598349452, 0.1698532701, 0.3164281845, 0.197365135, 0.2664730847, 0.1186... |
https://github.com/huggingface/datasets/issues/4133 | HANS dataset preview broken | The dataset cannot be loaded, be it in normal or streaming mode.
```python
>>> import datasets
>>> dataset=datasets.load_dataset("hans", split="train", streaming=True)
>>> next(iter(dataset))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/slesage/hf/datasets-preview-back... | ## Dataset viewer issue for '*hans*'
**Link:** [https://huggingface.co/datasets/hans](https://huggingface.co/datasets/hans)
HANS dataset preview is broken with error 400
Am I the one who added this dataset ? No
| 224 | HANS dataset preview broken
## Dataset viewer issue for '*hans*'
**Link:** [https://huggingface.co/datasets/hans](https://huggingface.co/datasets/hans)
HANS dataset preview is broken with error 400
Am I the one who added this dataset ? No
The dataset cannot be loaded, be it in normal or streaming mode.
... | [ -0.346259594, -0.3120446801, 0.0280241612, 0.3308504522, 0.1779608727, -0.0072313831, 0.201488018, 0.2768096924, 0.0376163647, -0.036823485, -0.3076952994, 0.1620683521, -0.0498246849, 0.0480444469, 0.1839516014, -0.2982971966, 0.0739781484, 0.2516166568, -0.4996129572, 0.22415... |
https://github.com/huggingface/datasets/issues/4133 | HANS dataset preview broken | Hi! I've opened a PR that should make this dataset streamable. You can test it as follows:
```python
from datasets import load_dataset
dset = load_dataset("hans", split="train", streaming=True, revision="49decd29839c792ecc24ac88f861cbdec30c1c40")
```
@severo The current script doesn't throw an error in normal mod... | ## Dataset viewer issue for '*hans*'
**Link:** [https://huggingface.co/datasets/hans](https://huggingface.co/datasets/hans)
HANS dataset preview is broken with error 400
Am I the one who added this dataset ? No
| 65 | HANS dataset preview broken
## Dataset viewer issue for '*hans*'
**Link:** [https://huggingface.co/datasets/hans](https://huggingface.co/datasets/hans)
HANS dataset preview is broken with error 400
Am I the one who added this dataset ? No
Hi! I've opened a PR that should make this dataset streamable. You ... | [ -0.4986243248, -0.1840554327, 0.032281667, 0.2166861296, 0.2157129794, 0.0747323558, 0.1014707163, 0.2017798275, -0.0046712356, 0.1359724402, -0.2016773522, 0.1920164078, -0.0896616727, 0.2093836367, 0.0798815563, -0.3173092306, 0.1725661606, 0.1805787832, -0.5114350915, 0.3107... |
https://github.com/huggingface/datasets/issues/4133 | HANS dataset preview broken | Thanks for this, it works well! The dataset viewer is using https://github.com/huggingface/datasets/releases/tag/2.0.0, I'm eager to upgrade to 2.0.1 😉
**Link:** [https://huggingface.co/datasets/hans](https://huggingface.co/datasets/hans)
HANS dataset preview is broken with error 400
Am I the one who added this dataset ? No
| 20 | HANS dataset preview broken
## Dataset viewer issue for '*hans*'
**Link:** [https://huggingface.co/datasets/hans](https://huggingface.co/datasets/hans)
HANS dataset preview is broken with error 400
Am I the one who added this dataset ? No
Thanks for this, it works well! The dataset viewer is usin... | [ -0.2305552959, -0.2901004851, 0.0092154648, 0.262542516, 0.0672578663, 0.0622462146, 0.0577356815, 0.2000459135, 0.0401800871, 0.0267511886, -0.3102258742, 0.0882028714, -0.0385140143, 0.0455970019, 0.1961985677, -0.283857584, 0.2299973369, 0.121255897, -0.4576295018, 0.1841763... |
https://github.com/huggingface/datasets/issues/4124 | Image decoding often fails when transforming Image datasets | A quick hack I have found is to access the image first, before running the transforms; this makes sure the image is decoded before being passed on.
For this I just needed to add `example['img'] = example['img']` to the top of my `generate_flipped_data` function, defined above, so that the image decode is invoked.... | ## Describe the bug
When transforming/modifying images in an image dataset using the `map` function the PIL images often fail to decode in time for the image transforms, causing errors.
Using a debugger it is easy to see what the problem is, the Image decode invocation does not take place and the resulting image pa... | 163 | Image decoding often fails when transforming Image datasets
## Describe the bug
When transforming/modifying images in an image dataset using the `map` function the PIL images often fail to decode in time for the image transforms, causing errors.
Using a debugger it is easy to see what the problem is, the Image d... | [ -0.0782001764, -0.0185160097, -0.1789414287, 0.1256359518, 0.1442411542, -0.1011210158, 0.061807964, 0.2104717195, -0.1569966078, 0.1313689649, 0.1810033619, 0.5930034518, 0.199265331, -0.1440647393, -0.1841397285, -0.1133295894, 0.1441317201, 0.3657739758, -0.1194041669, -0.15... |
https://github.com/huggingface/datasets/issues/4124 | Image decoding often fails when transforming Image datasets | Hi @RafayAK, thanks for reporting.
The current implementation of the Image feature performs decoding only if the "img" field is accessed by the mapped function.
In your original `generate_flipped_data` function:
- it only accesses the "img" field (and thus performs decoding) if `rng.random() > p`;
- on the othe... | ## Describe the bug
When transforming/modifying images in an image dataset using the `map` function the PIL images often fail to decode in time for the image transforms, causing errors.
Using a debugger it is easy to see what the problem is, the Image decode invocation does not take place and the resulting image pa... | 237 | Image decoding often fails when transforming Image datasets
## Describe the bug
When transforming/modifying images in an image dataset using the `map` function the PIL images often fail to decode in time for the image transforms, causing errors.
Using a debugger it is easy to see what the problem is, the Image d... | [ -0.0782001764, -0.0185160097, -0.1789414287, 0.1256359518, 0.1442411542, -0.1011210158, 0.061807964, 0.2104717195, -0.1569966078, 0.1313689649, 0.1810033619, 0.5930034518, 0.199265331, -0.1440647393, -0.1841397285, -0.1133295894, 0.1441317201, 0.3657739758, -0.1194041669, -0.15... |
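Taken together, the two rows above suggest a simple user-side fix: read the image column unconditionally at the top of the mapped function so the lazy decode always runs before any branch. A sketch of the patched function — the flip logic and the 'img' column name come from the thread, while `rng` and `p` are assumed stand-ins:

```python
import random

from PIL import Image

rng = random.Random(0)

def generate_flipped_data(example, p=0.5):
    # Touch the field first: `datasets` decodes Image columns lazily, only
    # when the mapped function actually accesses them.
    example["img"] = example["img"]
    if rng.random() > p:
        # The value is now guaranteed to be a decoded PIL image rather
        # than an undecoded payload, so transforms are safe to apply.
        example["img"] = example["img"].transpose(Image.FLIP_LEFT_RIGHT)
    return example
```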