8.7. Weather prediction#
Figure: One of the weather prediction results after training the GNN models in this exercise.
This notebook is converted from this GitHub branch: mllam/neural-lam
8.7.1. Introduction#
By the end of this notebook, you’ll be able to:
Construct GNN models to predict the weather;
Load trained model structures;
Become familiar with the PyTorch framework;
Further develop your skills in writing functions.
Graph Neural Networks (GNNs) have emerged as an innovative tool for weather prediction because they can model the inherently complex spatial dependencies of the atmosphere. Traditional methods often struggle to capture the intricate relationships between atmospheric variables across vast geographical regions. By representing these regions as nodes and their interactions as edges, GNNs can effectively model the spatial-temporal correlations that drive weather patterns.
However, the adoption of GNNs in weather prediction is not without challenges. High-resolution meteorological data is essential to unlock their full potential, and the computational demands of GNNs can be significant. Nevertheless, their integration with other deep learning frameworks has shown remarkable potential, enhancing the accuracy and scalability of forecasting systems.
In this exercise, we will use the MEPS dataset for graph-based probabilistic weather prediction over a limited area. The forecasting system for the Nordic region is the MetCoOp Ensemble Prediction System (MEPS), a cooperation on operational Numerical Weather Prediction (NWP) between the meteorological institutes of Finland, Norway, Sweden, and Estonia. It has a horizontal resolution of 2.5 kilometers.
Probabilistic models are intended to provide a range of possibilities. This complements existing forecast methods and helps communicate forecast uncertainty before and during weather events. For example, your local forecast may report a low temperature of -10°C. This is the official forecast, but there is still uncertainty, and probabilistic information helps you understand that uncertainty and make informed decisions. For instance, if the forecast low for your location is -10°C but there is still a possibility of -15°C, you may want to protect tender vegetation against a possible freeze.
Traditional forecasts give only a single value for temperature and precipitation, although that value can change from day to day as the event approaches. The goal of probabilistic weather forecasting is to provide the “goal posts”, i.e. the range of possibilities for a specific event. That is one reason researchers explore it with graph neural networks.
References:
Lam, Remi, Alvaro Sanchez-Gonzalez, Matthew Willson, Peter Wirnsberger, Meire Fortunato, Ferran Alet, Suman Ravuri et al. “Learning skillful medium-range global weather forecasting.” Science 382, no. 6677 (2023): 1416-1421.
Oskarsson, J., Landelius, T., & Lindsten, F. (2023). Graph-based Neural Weather Prediction for Limited Area Modeling. NeurIPS 2023 Workshop on Tackling Climate Change with Machine Learning, New Orleans, Louisiana, United States, arXiv:2309.17370.
Oskarsson, J., Landelius, T., Deisenroth, M. & Lindsten, F. (2024). Probabilistic Weather Forecasting with Hierarchical Graph Neural Networks. NeurIPS 2024, Vancouver, Canada, arXiv:2406.04759.
8.7.2. Set-up environment#
Follow the steps below to create the necessary Python environment and store the dataset for our journey into graph neural networks :)
Connect to your Google Drive
Download the dataset and necessary codes
Always remember to use Python 3.10 (the Colab default at the time of writing).
Install PyTorch 2.0.1 built for CUDA 11.7.
Install the required packages specified below.
Install PyTorch Geometric version 2.3.1.
# First we should connect the GOOGLE DRIVE
from google.colab import drive
drive.mount('/content/drive')
# Download the dataset and remember to upload them to your GOOGLE DRIVE
import pooch
data_url = 'https://unils-my.sharepoint.com/:u:/g/personal/haokun_liu_unil_ch/EVp6Oym5i_RElMf7jYz9VboB7ZdpyYKuTlBWLeVq9fyPTA?e=qMibgG&download=1'
hash = 'ad511b41c3d705e6353fe79f5a4c1bae4092844e221ed7cdbff2fc2d0e7075df'
data = pooch.retrieve(data_url, known_hash = hash, processor=pooch.Unzip())
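With the Unzip processor, pooch.retrieve returns the list of extracted file paths. As a quick sanity check, and as one possible way to copy the data to Drive (the destination path below is only an assumption, adjust it to your own layout):
# Inspect what pooch extracted; 'data' is a list of extracted file paths
print(len(data))
print(data[:3])

# One possible way to copy the extracted dataset into Google Drive
# (source/destination paths are assumptions -- adapt them to your setup):
# import os, shutil
# extracted_root = os.path.commonpath(data)
# shutil.copytree(extracted_root,
#                 "/content/drive/MyDrive/neural-lam-prob_model_lam/data",
#                 dirs_exist_ok=True)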
# Check the current version of torch in your Colab session; if torch is 2.0.1 with CUDA 11.7 we can continue, otherwise follow the steps below
import torch
print(torch.__version__)
2.0.1+cu117
# Uninstall the current torch stack in Colab, because we need to switch to the versions required by this exercise
!pip uninstall -y torch torchvision torchaudio
Found existing installation: torch 2.5.1+cu121
Uninstalling torch-2.5.1+cu121:
Successfully uninstalled torch-2.5.1+cu121
Found existing installation: torchvision 0.20.1+cu121
Uninstalling torchvision-0.20.1+cu121:
Successfully uninstalled torchvision-0.20.1+cu121
Found existing installation: torchaudio 2.5.1+cu121
Uninstalling torchaudio-2.5.1+cu121:
Successfully uninstalled torchaudio-2.5.1+cu121
# Install torch 2.0.1 together with compatible versions of torchvision, torchaudio, and torchtext
!pip install torch==2.0.1 torchvision torchaudio torchtext
Collecting torch==2.0.1
Downloading torch-2.0.1-cp310-cp310-manylinux1_x86_64.whl.metadata (24 kB)
Collecting torchvision
Downloading torchvision-0.20.1-cp310-cp310-manylinux1_x86_64.whl.metadata (6.1 kB)
Collecting torchaudio
Downloading torchaudio-2.5.1-cp310-cp310-manylinux1_x86_64.whl.metadata (6.4 kB)
Collecting torchtext
Downloading torchtext-0.18.0-cp310-cp310-manylinux1_x86_64.whl.metadata (7.9 kB)
Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from torch==2.0.1) (3.16.1)
Requirement already satisfied: typing-extensions in /usr/local/lib/python3.10/dist-packages (from torch==2.0.1) (4.12.2)
Requirement already satisfied: sympy in /usr/local/lib/python3.10/dist-packages (from torch==2.0.1) (1.13.1)
Requirement already satisfied: networkx in /usr/local/lib/python3.10/dist-packages (from torch==2.0.1) (3.4.2)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.10/dist-packages (from torch==2.0.1) (3.1.4)
Collecting nvidia-cuda-nvrtc-cu11==11.7.99 (from torch==2.0.1)
Downloading nvidia_cuda_nvrtc_cu11-11.7.99-2-py3-none-manylinux1_x86_64.whl.metadata (1.5 kB)
Collecting nvidia-cuda-runtime-cu11==11.7.99 (from torch==2.0.1)
Downloading nvidia_cuda_runtime_cu11-11.7.99-py3-none-manylinux1_x86_64.whl.metadata (1.6 kB)
Collecting nvidia-cuda-cupti-cu11==11.7.101 (from torch==2.0.1)
Downloading nvidia_cuda_cupti_cu11-11.7.101-py3-none-manylinux1_x86_64.whl.metadata (1.6 kB)
Collecting nvidia-cudnn-cu11==8.5.0.96 (from torch==2.0.1)
Downloading nvidia_cudnn_cu11-8.5.0.96-2-py3-none-manylinux1_x86_64.whl.metadata (1.6 kB)
Collecting nvidia-cublas-cu11==11.10.3.66 (from torch==2.0.1)
Downloading nvidia_cublas_cu11-11.10.3.66-py3-none-manylinux1_x86_64.whl.metadata (1.6 kB)
Collecting nvidia-cufft-cu11==10.9.0.58 (from torch==2.0.1)
Downloading nvidia_cufft_cu11-10.9.0.58-py3-none-manylinux2014_x86_64.whl.metadata (1.5 kB)
Collecting nvidia-curand-cu11==10.2.10.91 (from torch==2.0.1)
Downloading nvidia_curand_cu11-10.2.10.91-py3-none-manylinux1_x86_64.whl.metadata (1.6 kB)
Collecting nvidia-cusolver-cu11==11.4.0.1 (from torch==2.0.1)
Downloading nvidia_cusolver_cu11-11.4.0.1-2-py3-none-manylinux1_x86_64.whl.metadata (1.6 kB)
Collecting nvidia-cusparse-cu11==11.7.4.91 (from torch==2.0.1)
Downloading nvidia_cusparse_cu11-11.7.4.91-py3-none-manylinux1_x86_64.whl.metadata (1.6 kB)
Collecting nvidia-nccl-cu11==2.14.3 (from torch==2.0.1)
Downloading nvidia_nccl_cu11-2.14.3-py3-none-manylinux1_x86_64.whl.metadata (1.8 kB)
Collecting nvidia-nvtx-cu11==11.7.91 (from torch==2.0.1)
Downloading nvidia_nvtx_cu11-11.7.91-py3-none-manylinux1_x86_64.whl.metadata (1.7 kB)
Collecting triton==2.0.0 (from torch==2.0.1)
Downloading triton-2.0.0-1-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.0 kB)
Requirement already satisfied: setuptools in /usr/local/lib/python3.10/dist-packages (from nvidia-cublas-cu11==11.10.3.66->torch==2.0.1) (75.1.0)
Requirement already satisfied: wheel in /usr/local/lib/python3.10/dist-packages (from nvidia-cublas-cu11==11.10.3.66->torch==2.0.1) (0.45.0)
Requirement already satisfied: cmake in /usr/local/lib/python3.10/dist-packages (from triton==2.0.0->torch==2.0.1) (3.30.5)
Collecting lit (from triton==2.0.0->torch==2.0.1)
Downloading lit-18.1.8-py3-none-any.whl.metadata (2.5 kB)
Requirement already satisfied: numpy in /usr/local/lib/python3.10/dist-packages (from torchvision) (1.26.4)
INFO: pip is looking at multiple versions of torchvision to determine which version is compatible with other requirements. This could take a while.
Collecting torchvision
Downloading torchvision-0.20.0-cp310-cp310-manylinux1_x86_64.whl.metadata (6.1 kB)
Downloading torchvision-0.19.1-cp310-cp310-manylinux1_x86_64.whl.metadata (6.0 kB)
Downloading torchvision-0.19.0-cp310-cp310-manylinux1_x86_64.whl.metadata (6.0 kB)
Downloading torchvision-0.18.1-cp310-cp310-manylinux1_x86_64.whl.metadata (6.6 kB)
Downloading torchvision-0.18.0-cp310-cp310-manylinux1_x86_64.whl.metadata (6.6 kB)
Downloading torchvision-0.17.2-cp310-cp310-manylinux1_x86_64.whl.metadata (6.6 kB)
Downloading torchvision-0.17.1-cp310-cp310-manylinux1_x86_64.whl.metadata (6.6 kB)
INFO: pip is still looking at multiple versions of torchvision to determine which version is compatible with other requirements. This could take a while.
Downloading torchvision-0.17.0-cp310-cp310-manylinux1_x86_64.whl.metadata (6.6 kB)
Requirement already satisfied: requests in /usr/local/lib/python3.10/dist-packages (from torchvision) (2.32.3)
Downloading torchvision-0.16.2-cp310-cp310-manylinux1_x86_64.whl.metadata (6.6 kB)
Downloading torchvision-0.16.1-cp310-cp310-manylinux1_x86_64.whl.metadata (6.6 kB)
Downloading torchvision-0.16.0-cp310-cp310-manylinux1_x86_64.whl.metadata (6.6 kB)
Downloading torchvision-0.15.2-cp310-cp310-manylinux1_x86_64.whl.metadata (11 kB)
Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in /usr/local/lib/python3.10/dist-packages (from torchvision) (11.0.0)
INFO: pip is looking at multiple versions of torchaudio to determine which version is compatible with other requirements. This could take a while.
Collecting torchaudio
Downloading torchaudio-2.5.0-cp310-cp310-manylinux1_x86_64.whl.metadata (6.4 kB)
Downloading torchaudio-2.4.1-cp310-cp310-manylinux1_x86_64.whl.metadata (6.4 kB)
Downloading torchaudio-2.4.0-cp310-cp310-manylinux1_x86_64.whl.metadata (6.4 kB)
Downloading torchaudio-2.3.1-cp310-cp310-manylinux1_x86_64.whl.metadata (6.4 kB)
Downloading torchaudio-2.3.0-cp310-cp310-manylinux1_x86_64.whl.metadata (6.4 kB)
Downloading torchaudio-2.2.2-cp310-cp310-manylinux1_x86_64.whl.metadata (6.4 kB)
Downloading torchaudio-2.2.1-cp310-cp310-manylinux1_x86_64.whl.metadata (6.4 kB)
INFO: pip is still looking at multiple versions of torchaudio to determine which version is compatible with other requirements. This could take a while.
Downloading torchaudio-2.2.0-cp310-cp310-manylinux1_x86_64.whl.metadata (6.4 kB)
Downloading torchaudio-2.1.2-cp310-cp310-manylinux1_x86_64.whl.metadata (6.4 kB)
Downloading torchaudio-2.1.1-cp310-cp310-manylinux1_x86_64.whl.metadata (6.4 kB)
Downloading torchaudio-2.1.0-cp310-cp310-manylinux1_x86_64.whl.metadata (5.7 kB)
Downloading torchaudio-2.0.2-cp310-cp310-manylinux1_x86_64.whl.metadata (1.2 kB)
Requirement already satisfied: tqdm in /usr/local/lib/python3.10/dist-packages (from torchtext) (4.66.6)
INFO: pip is looking at multiple versions of torchtext to determine which version is compatible with other requirements. This could take a while.
Collecting torchtext
Downloading torchtext-0.17.2-cp310-cp310-manylinux1_x86_64.whl.metadata (7.9 kB)
Downloading torchtext-0.17.1-cp310-cp310-manylinux1_x86_64.whl.metadata (7.6 kB)
Downloading torchtext-0.17.0-cp310-cp310-manylinux1_x86_64.whl.metadata (7.6 kB)
Downloading torchtext-0.16.2-cp310-cp310-manylinux1_x86_64.whl.metadata (7.5 kB)
Downloading torchtext-0.16.1-cp310-cp310-manylinux1_x86_64.whl.metadata (7.5 kB)
Downloading torchtext-0.16.0-cp310-cp310-manylinux1_x86_64.whl.metadata (7.5 kB)
Downloading torchtext-0.15.2-cp310-cp310-manylinux1_x86_64.whl.metadata (7.4 kB)
Collecting torchdata==0.6.1 (from torchtext)
Downloading torchdata-0.6.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (13 kB)
Requirement already satisfied: urllib3>=1.25 in /usr/local/lib/python3.10/dist-packages (from torchdata==0.6.1->torchtext) (2.2.3)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.10/dist-packages (from jinja2->torch==2.0.1) (3.0.2)
Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/dist-packages (from requests->torchvision) (3.4.0)
Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests->torchvision) (3.10)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/dist-packages (from requests->torchvision) (2024.8.30)
Requirement already satisfied: mpmath<1.4,>=1.1.0 in /usr/local/lib/python3.10/dist-packages (from sympy->torch==2.0.1) (1.3.0)
Downloading torch-2.0.1-cp310-cp310-manylinux1_x86_64.whl (619.9 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 619.9/619.9 MB 3.2 MB/s eta 0:00:00
?25hDownloading nvidia_cublas_cu11-11.10.3.66-py3-none-manylinux1_x86_64.whl (317.1 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 317.1/317.1 MB 4.2 MB/s eta 0:00:00
?25hDownloading nvidia_cuda_cupti_cu11-11.7.101-py3-none-manylinux1_x86_64.whl (11.8 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 11.8/11.8 MB 23.4 MB/s eta 0:00:00
?25hDownloading nvidia_cuda_nvrtc_cu11-11.7.99-2-py3-none-manylinux1_x86_64.whl (21.0 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 21.0/21.0 MB 78.0 MB/s eta 0:00:00
?25hDownloading nvidia_cuda_runtime_cu11-11.7.99-py3-none-manylinux1_x86_64.whl (849 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 849.3/849.3 kB 53.7 MB/s eta 0:00:00
?25hDownloading nvidia_cudnn_cu11-8.5.0.96-2-py3-none-manylinux1_x86_64.whl (557.1 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 557.1/557.1 MB 2.9 MB/s eta 0:00:00
?25hDownloading nvidia_cufft_cu11-10.9.0.58-py3-none-manylinux2014_x86_64.whl (168.4 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 168.4/168.4 MB 7.4 MB/s eta 0:00:00
?25hDownloading nvidia_curand_cu11-10.2.10.91-py3-none-manylinux1_x86_64.whl (54.6 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 54.6/54.6 MB 18.6 MB/s eta 0:00:00
?25hDownloading nvidia_cusolver_cu11-11.4.0.1-2-py3-none-manylinux1_x86_64.whl (102.6 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 102.6/102.6 MB 8.5 MB/s eta 0:00:00
?25hDownloading nvidia_cusparse_cu11-11.7.4.91-py3-none-manylinux1_x86_64.whl (173.2 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 173.2/173.2 MB 6.5 MB/s eta 0:00:00
?25hDownloading nvidia_nccl_cu11-2.14.3-py3-none-manylinux1_x86_64.whl (177.1 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 177.1/177.1 MB 6.7 MB/s eta 0:00:00
?25hDownloading nvidia_nvtx_cu11-11.7.91-py3-none-manylinux1_x86_64.whl (98 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 98.6/98.6 kB 9.0 MB/s eta 0:00:00
?25hDownloading triton-2.0.0-1-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (63.3 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 63.3/63.3 MB 11.4 MB/s eta 0:00:00
?25hDownloading torchvision-0.15.2-cp310-cp310-manylinux1_x86_64.whl (6.0 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 6.0/6.0 MB 108.6 MB/s eta 0:00:00
?25hDownloading torchaudio-2.0.2-cp310-cp310-manylinux1_x86_64.whl (4.4 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.4/4.4 MB 41.1 MB/s eta 0:00:00
?25hDownloading torchtext-0.15.2-cp310-cp310-manylinux1_x86_64.whl (2.0 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 2.0/2.0 MB 73.3 MB/s eta 0:00:00
?25hDownloading torchdata-0.6.1-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.6 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.6/4.6 MB 80.4 MB/s eta 0:00:00
?25hDownloading lit-18.1.8-py3-none-any.whl (96 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 96.4/96.4 kB 9.4 MB/s eta 0:00:00
?25hInstalling collected packages: lit, nvidia-nvtx-cu11, nvidia-nccl-cu11, nvidia-cusparse-cu11, nvidia-curand-cu11, nvidia-cufft-cu11, nvidia-cuda-runtime-cu11, nvidia-cuda-nvrtc-cu11, nvidia-cuda-cupti-cu11, nvidia-cublas-cu11, nvidia-cusolver-cu11, nvidia-cudnn-cu11, triton, torch, torchdata, torchvision, torchtext, torchaudio
Successfully installed lit-18.1.8 nvidia-cublas-cu11-11.10.3.66 nvidia-cuda-cupti-cu11-11.7.101 nvidia-cuda-nvrtc-cu11-11.7.99 nvidia-cuda-runtime-cu11-11.7.99 nvidia-cudnn-cu11-8.5.0.96 nvidia-cufft-cu11-10.9.0.58 nvidia-curand-cu11-10.2.10.91 nvidia-cusolver-cu11-11.4.0.1 nvidia-cusparse-cu11-11.7.4.91 nvidia-nccl-cu11-2.14.3 nvidia-nvtx-cu11-11.7.91 torch-2.0.1 torchaudio-2.0.2 torchdata-0.6.1 torchtext-0.15.2 torchvision-0.15.2 triton-2.0.0
# Install the additional packages required by the model
!pip install pytorch-lightning==2.0.3
!pip install Cartopy==0.22.0
!pip install tueplots==0.0.8
!pip install codespell==2.0.0
!pip install black==21.9b0
!pip install isort==5.9.3
!pip install flake8==4.0.1
!pip install pylint==3.0.3
!pip install pre-commit==2.15.0
# Install PyTorch Geometric version 2.3.1 and its companion libraries, built for torch 2.0.1 + CUDA 11.7
!pip install pyg-lib==0.2.0 torch-scatter==2.1.1 torch-sparse==0.6.17 torch-cluster==1.6.1\
torch-geometric==2.3.1 -f https://pytorch-geometric.com/whl/torch-2.0.1+cu117.html
Collecting pytorch-lightning==2.0.3
Downloading pytorch_lightning-2.0.3-py3-none-any.whl.metadata (23 kB)
Requirement already satisfied: numpy>=1.17.2 in /usr/local/lib/python3.10/dist-packages (from pytorch-lightning==2.0.3) (1.26.4)
Requirement already satisfied: torch>=1.11.0 in /usr/local/lib/python3.10/dist-packages (from pytorch-lightning==2.0.3) (2.0.1)
Requirement already satisfied: tqdm>=4.57.0 in /usr/local/lib/python3.10/dist-packages (from pytorch-lightning==2.0.3) (4.66.6)
Requirement already satisfied: PyYAML>=5.4 in /usr/local/lib/python3.10/dist-packages (from pytorch-lightning==2.0.3) (6.0.2)
Requirement already satisfied: fsspec>2021.06.0 in /usr/local/lib/python3.10/dist-packages (from fsspec[http]>2021.06.0->pytorch-lightning==2.0.3) (2024.10.0)
Collecting torchmetrics>=0.7.0 (from pytorch-lightning==2.0.3)
Downloading torchmetrics-1.6.0-py3-none-any.whl.metadata (20 kB)
Requirement already satisfied: packaging>=17.1 in /usr/local/lib/python3.10/dist-packages (from pytorch-lightning==2.0.3) (24.2)
Requirement already satisfied: typing-extensions>=4.0.0 in /usr/local/lib/python3.10/dist-packages (from pytorch-lightning==2.0.3) (4.12.2)
Collecting lightning-utilities>=0.7.0 (from pytorch-lightning==2.0.3)
Downloading lightning_utilities-0.11.8-py3-none-any.whl.metadata (5.2 kB)
Requirement already satisfied: aiohttp!=4.0.0a0,!=4.0.0a1 in /usr/local/lib/python3.10/dist-packages (from fsspec[http]>2021.06.0->pytorch-lightning==2.0.3) (3.11.1)
Requirement already satisfied: setuptools in /usr/local/lib/python3.10/dist-packages (from lightning-utilities>=0.7.0->pytorch-lightning==2.0.3) (75.1.0)
Requirement already satisfied: filelock in /usr/local/lib/python3.10/dist-packages (from torch>=1.11.0->pytorch-lightning==2.0.3) (3.16.1)
Requirement already satisfied: sympy in /usr/local/lib/python3.10/dist-packages (from torch>=1.11.0->pytorch-lightning==2.0.3) (1.13.1)
Requirement already satisfied: networkx in /usr/local/lib/python3.10/dist-packages (from torch>=1.11.0->pytorch-lightning==2.0.3) (3.4.2)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.10/dist-packages (from torch>=1.11.0->pytorch-lightning==2.0.3) (3.1.4)
Requirement already satisfied: nvidia-cuda-nvrtc-cu11==11.7.99 in /usr/local/lib/python3.10/dist-packages (from torch>=1.11.0->pytorch-lightning==2.0.3) (11.7.99)
Requirement already satisfied: nvidia-cuda-runtime-cu11==11.7.99 in /usr/local/lib/python3.10/dist-packages (from torch>=1.11.0->pytorch-lightning==2.0.3) (11.7.99)
Requirement already satisfied: nvidia-cuda-cupti-cu11==11.7.101 in /usr/local/lib/python3.10/dist-packages (from torch>=1.11.0->pytorch-lightning==2.0.3) (11.7.101)
Requirement already satisfied: nvidia-cudnn-cu11==8.5.0.96 in /usr/local/lib/python3.10/dist-packages (from torch>=1.11.0->pytorch-lightning==2.0.3) (8.5.0.96)
Requirement already satisfied: nvidia-cublas-cu11==11.10.3.66 in /usr/local/lib/python3.10/dist-packages (from torch>=1.11.0->pytorch-lightning==2.0.3) (11.10.3.66)
Requirement already satisfied: nvidia-cufft-cu11==10.9.0.58 in /usr/local/lib/python3.10/dist-packages (from torch>=1.11.0->pytorch-lightning==2.0.3) (10.9.0.58)
Requirement already satisfied: nvidia-curand-cu11==10.2.10.91 in /usr/local/lib/python3.10/dist-packages (from torch>=1.11.0->pytorch-lightning==2.0.3) (10.2.10.91)
Requirement already satisfied: nvidia-cusolver-cu11==11.4.0.1 in /usr/local/lib/python3.10/dist-packages (from torch>=1.11.0->pytorch-lightning==2.0.3) (11.4.0.1)
Requirement already satisfied: nvidia-cusparse-cu11==11.7.4.91 in /usr/local/lib/python3.10/dist-packages (from torch>=1.11.0->pytorch-lightning==2.0.3) (11.7.4.91)
Requirement already satisfied: nvidia-nccl-cu11==2.14.3 in /usr/local/lib/python3.10/dist-packages (from torch>=1.11.0->pytorch-lightning==2.0.3) (2.14.3)
Requirement already satisfied: nvidia-nvtx-cu11==11.7.91 in /usr/local/lib/python3.10/dist-packages (from torch>=1.11.0->pytorch-lightning==2.0.3) (11.7.91)
Requirement already satisfied: triton==2.0.0 in /usr/local/lib/python3.10/dist-packages (from torch>=1.11.0->pytorch-lightning==2.0.3) (2.0.0)
Requirement already satisfied: wheel in /usr/local/lib/python3.10/dist-packages (from nvidia-cublas-cu11==11.10.3.66->torch>=1.11.0->pytorch-lightning==2.0.3) (0.45.0)
Requirement already satisfied: cmake in /usr/local/lib/python3.10/dist-packages (from triton==2.0.0->torch>=1.11.0->pytorch-lightning==2.0.3) (3.30.5)
Requirement already satisfied: lit in /usr/local/lib/python3.10/dist-packages (from triton==2.0.0->torch>=1.11.0->pytorch-lightning==2.0.3) (18.1.8)
Requirement already satisfied: aiohappyeyeballs>=2.3.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]>2021.06.0->pytorch-lightning==2.0.3) (2.4.3)
Requirement already satisfied: aiosignal>=1.1.2 in /usr/local/lib/python3.10/dist-packages (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]>2021.06.0->pytorch-lightning==2.0.3) (1.3.1)
Requirement already satisfied: attrs>=17.3.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]>2021.06.0->pytorch-lightning==2.0.3) (24.2.0)
Requirement already satisfied: frozenlist>=1.1.1 in /usr/local/lib/python3.10/dist-packages (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]>2021.06.0->pytorch-lightning==2.0.3) (1.5.0)
Requirement already satisfied: multidict<7.0,>=4.5 in /usr/local/lib/python3.10/dist-packages (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]>2021.06.0->pytorch-lightning==2.0.3) (6.1.0)
Requirement already satisfied: propcache>=0.2.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]>2021.06.0->pytorch-lightning==2.0.3) (0.2.0)
Requirement already satisfied: yarl<2.0,>=1.17.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]>2021.06.0->pytorch-lightning==2.0.3) (1.17.1)
Requirement already satisfied: async-timeout<6.0,>=4.0 in /usr/local/lib/python3.10/dist-packages (from aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]>2021.06.0->pytorch-lightning==2.0.3) (4.0.3)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.10/dist-packages (from jinja2->torch>=1.11.0->pytorch-lightning==2.0.3) (3.0.2)
Requirement already satisfied: mpmath<1.4,>=1.1.0 in /usr/local/lib/python3.10/dist-packages (from sympy->torch>=1.11.0->pytorch-lightning==2.0.3) (1.3.0)
Requirement already satisfied: idna>=2.0 in /usr/local/lib/python3.10/dist-packages (from yarl<2.0,>=1.17.0->aiohttp!=4.0.0a0,!=4.0.0a1->fsspec[http]>2021.06.0->pytorch-lightning==2.0.3) (3.10)
Downloading pytorch_lightning-2.0.3-py3-none-any.whl (720 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 720.6/720.6 kB 14.9 MB/s eta 0:00:00
?25hDownloading lightning_utilities-0.11.8-py3-none-any.whl (26 kB)
Downloading torchmetrics-1.6.0-py3-none-any.whl (926 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 926.4/926.4 kB 45.8 MB/s eta 0:00:00
?25hInstalling collected packages: lightning-utilities, torchmetrics, pytorch-lightning
Successfully installed lightning-utilities-0.11.8 pytorch-lightning-2.0.3 torchmetrics-1.6.0
Collecting Cartopy==0.22.0
Downloading Cartopy-0.22.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (15 kB)
Requirement already satisfied: numpy>=1.21 in /usr/local/lib/python3.10/dist-packages (from Cartopy==0.22.0) (1.26.4)
Requirement already satisfied: matplotlib>=3.4 in /usr/local/lib/python3.10/dist-packages (from Cartopy==0.22.0) (3.8.0)
Requirement already satisfied: shapely>=1.7 in /usr/local/lib/python3.10/dist-packages (from Cartopy==0.22.0) (2.0.6)
Requirement already satisfied: packaging>=20 in /usr/local/lib/python3.10/dist-packages (from Cartopy==0.22.0) (24.2)
Requirement already satisfied: pyshp>=2.1 in /usr/local/lib/python3.10/dist-packages (from Cartopy==0.22.0) (2.3.1)
Requirement already satisfied: pyproj>=3.1.0 in /usr/local/lib/python3.10/dist-packages (from Cartopy==0.22.0) (3.7.0)
Requirement already satisfied: contourpy>=1.0.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib>=3.4->Cartopy==0.22.0) (1.3.1)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.10/dist-packages (from matplotlib>=3.4->Cartopy==0.22.0) (0.12.1)
Requirement already satisfied: fonttools>=4.22.0 in /usr/local/lib/python3.10/dist-packages (from matplotlib>=3.4->Cartopy==0.22.0) (4.54.1)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib>=3.4->Cartopy==0.22.0) (1.4.7)
Requirement already satisfied: pillow>=6.2.0 in /usr/local/lib/python3.10/dist-packages (from matplotlib>=3.4->Cartopy==0.22.0) (11.0.0)
Requirement already satisfied: pyparsing>=2.3.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib>=3.4->Cartopy==0.22.0) (3.2.0)
Requirement already satisfied: python-dateutil>=2.7 in /usr/local/lib/python3.10/dist-packages (from matplotlib>=3.4->Cartopy==0.22.0) (2.8.2)
Requirement already satisfied: certifi in /usr/local/lib/python3.10/dist-packages (from pyproj>=3.1.0->Cartopy==0.22.0) (2024.8.30)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.10/dist-packages (from python-dateutil>=2.7->matplotlib>=3.4->Cartopy==0.22.0) (1.16.0)
Downloading Cartopy-0.22.0-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (11.8 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 11.8/11.8 MB 55.1 MB/s eta 0:00:00
?25hInstalling collected packages: Cartopy
Successfully installed Cartopy-0.22.0
Collecting tueplots==0.0.8
Downloading tueplots-0.0.8-py3-none-any.whl.metadata (4.5 kB)
Requirement already satisfied: matplotlib in /usr/local/lib/python3.10/dist-packages (from tueplots==0.0.8) (3.8.0)
Requirement already satisfied: numpy in /usr/local/lib/python3.10/dist-packages (from tueplots==0.0.8) (1.26.4)
Requirement already satisfied: contourpy>=1.0.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib->tueplots==0.0.8) (1.3.1)
Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.10/dist-packages (from matplotlib->tueplots==0.0.8) (0.12.1)
Requirement already satisfied: fonttools>=4.22.0 in /usr/local/lib/python3.10/dist-packages (from matplotlib->tueplots==0.0.8) (4.54.1)
Requirement already satisfied: kiwisolver>=1.0.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib->tueplots==0.0.8) (1.4.7)
Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.10/dist-packages (from matplotlib->tueplots==0.0.8) (24.2)
Requirement already satisfied: pillow>=6.2.0 in /usr/local/lib/python3.10/dist-packages (from matplotlib->tueplots==0.0.8) (11.0.0)
Requirement already satisfied: pyparsing>=2.3.1 in /usr/local/lib/python3.10/dist-packages (from matplotlib->tueplots==0.0.8) (3.2.0)
Requirement already satisfied: python-dateutil>=2.7 in /usr/local/lib/python3.10/dist-packages (from matplotlib->tueplots==0.0.8) (2.8.2)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.10/dist-packages (from python-dateutil>=2.7->matplotlib->tueplots==0.0.8) (1.16.0)
Downloading tueplots-0.0.8-py3-none-any.whl (19 kB)
Installing collected packages: tueplots
Successfully installed tueplots-0.0.8
Collecting codespell==2.0.0
Downloading codespell-2.0.0-py3-none-any.whl.metadata (9.8 kB)
Downloading codespell-2.0.0-py3-none-any.whl (172 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 172.2/172.2 kB 4.6 MB/s eta 0:00:00
?25hInstalling collected packages: codespell
Successfully installed codespell-2.0.0
Collecting black==21.9b0
Downloading black-21.9b0-py3-none-any.whl.metadata (36 kB)
Requirement already satisfied: click>=7.1.2 in /usr/local/lib/python3.10/dist-packages (from black==21.9b0) (8.1.7)
Requirement already satisfied: platformdirs>=2 in /usr/local/lib/python3.10/dist-packages (from black==21.9b0) (4.3.6)
Collecting tomli<2.0.0,>=0.2.6 (from black==21.9b0)
Downloading tomli-1.2.3-py3-none-any.whl.metadata (9.1 kB)
Requirement already satisfied: regex>=2020.1.8 in /usr/local/lib/python3.10/dist-packages (from black==21.9b0) (2024.9.11)
Collecting pathspec<1,>=0.9.0 (from black==21.9b0)
Downloading pathspec-0.12.1-py3-none-any.whl.metadata (21 kB)
Requirement already satisfied: typing-extensions>=3.10.0.0 in /usr/local/lib/python3.10/dist-packages (from black==21.9b0) (4.12.2)
Collecting mypy-extensions>=0.4.3 (from black==21.9b0)
Downloading mypy_extensions-1.0.0-py3-none-any.whl.metadata (1.1 kB)
Downloading black-21.9b0-py3-none-any.whl (148 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 148.2/148.2 kB 9.1 MB/s eta 0:00:00
?25hDownloading mypy_extensions-1.0.0-py3-none-any.whl (4.7 kB)
Downloading pathspec-0.12.1-py3-none-any.whl (31 kB)
Downloading tomli-1.2.3-py3-none-any.whl (12 kB)
Installing collected packages: tomli, pathspec, mypy-extensions, black
Attempting uninstall: tomli
Found existing installation: tomli 2.1.0
Uninstalling tomli-2.1.0:
Successfully uninstalled tomli-2.1.0
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
sphinx 8.1.3 requires tomli>=2; python_version < "3.11", but you have tomli 1.2.3 which is incompatible.
Successfully installed black-21.9b0 mypy-extensions-1.0.0 pathspec-0.12.1 tomli-1.2.3
Collecting isort==5.9.3
Downloading isort-5.9.3-py3-none-any.whl.metadata (12 kB)
Downloading isort-5.9.3-py3-none-any.whl (106 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 106.1/106.1 kB 4.9 MB/s eta 0:00:00
?25hInstalling collected packages: isort
Successfully installed isort-5.9.3
Collecting flake8==4.0.1
Downloading flake8-4.0.1-py2.py3-none-any.whl.metadata (4.0 kB)
Collecting mccabe<0.7.0,>=0.6.0 (from flake8==4.0.1)
Downloading mccabe-0.6.1-py2.py3-none-any.whl.metadata (4.3 kB)
Collecting pycodestyle<2.9.0,>=2.8.0 (from flake8==4.0.1)
Downloading pycodestyle-2.8.0-py2.py3-none-any.whl.metadata (31 kB)
Collecting pyflakes<2.5.0,>=2.4.0 (from flake8==4.0.1)
Downloading pyflakes-2.4.0-py2.py3-none-any.whl.metadata (3.9 kB)
Downloading flake8-4.0.1-py2.py3-none-any.whl (64 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 64.1/64.1 kB 3.0 MB/s eta 0:00:00
?25hDownloading mccabe-0.6.1-py2.py3-none-any.whl (8.6 kB)
Downloading pycodestyle-2.8.0-py2.py3-none-any.whl (42 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 42.1/42.1 kB 4.0 MB/s eta 0:00:00
?25hDownloading pyflakes-2.4.0-py2.py3-none-any.whl (69 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 69.7/69.7 kB 5.9 MB/s eta 0:00:00
?25hInstalling collected packages: mccabe, pyflakes, pycodestyle, flake8
Successfully installed flake8-4.0.1 mccabe-0.6.1 pycodestyle-2.8.0 pyflakes-2.4.0
Collecting pylint==3.0.3
Downloading pylint-3.0.3-py3-none-any.whl.metadata (12 kB)
Requirement already satisfied: platformdirs>=2.2.0 in /usr/local/lib/python3.10/dist-packages (from pylint==3.0.3) (4.3.6)
Collecting astroid<=3.1.0-dev0,>=3.0.1 (from pylint==3.0.3)
Downloading astroid-3.0.3-py3-none-any.whl.metadata (4.5 kB)
Requirement already satisfied: isort!=5.13.0,<6,>=4.2.5 in /usr/local/lib/python3.10/dist-packages (from pylint==3.0.3) (5.9.3)
Requirement already satisfied: mccabe<0.8,>=0.6 in /usr/local/lib/python3.10/dist-packages (from pylint==3.0.3) (0.6.1)
Collecting tomlkit>=0.10.1 (from pylint==3.0.3)
Downloading tomlkit-0.13.2-py3-none-any.whl.metadata (2.7 kB)
Collecting dill>=0.2 (from pylint==3.0.3)
Downloading dill-0.3.9-py3-none-any.whl.metadata (10 kB)
Requirement already satisfied: tomli>=1.1.0 in /usr/local/lib/python3.10/dist-packages (from pylint==3.0.3) (1.2.3)
Requirement already satisfied: typing-extensions>=4.0.0 in /usr/local/lib/python3.10/dist-packages (from astroid<=3.1.0-dev0,>=3.0.1->pylint==3.0.3) (4.12.2)
Downloading pylint-3.0.3-py3-none-any.whl (510 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 510.6/510.6 kB 10.6 MB/s eta 0:00:00
?25hDownloading astroid-3.0.3-py3-none-any.whl (275 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 275.2/275.2 kB 24.3 MB/s eta 0:00:00
?25hDownloading dill-0.3.9-py3-none-any.whl (119 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 119.4/119.4 kB 11.3 MB/s eta 0:00:00
?25hDownloading tomlkit-0.13.2-py3-none-any.whl (37 kB)
Installing collected packages: tomlkit, dill, astroid, pylint
Successfully installed astroid-3.0.3 dill-0.3.9 pylint-3.0.3 tomlkit-0.13.2
Collecting pre-commit==2.15.0
Downloading pre_commit-2.15.0-py2.py3-none-any.whl.metadata (1.9 kB)
Collecting cfgv>=2.0.0 (from pre-commit==2.15.0)
Downloading cfgv-3.4.0-py2.py3-none-any.whl.metadata (8.5 kB)
Collecting identify>=1.0.0 (from pre-commit==2.15.0)
Downloading identify-2.6.2-py2.py3-none-any.whl.metadata (4.4 kB)
Collecting nodeenv>=0.11.1 (from pre-commit==2.15.0)
Downloading nodeenv-1.9.1-py2.py3-none-any.whl.metadata (21 kB)
Requirement already satisfied: pyyaml>=5.1 in /usr/local/lib/python3.10/dist-packages (from pre-commit==2.15.0) (6.0.2)
Requirement already satisfied: toml in /usr/local/lib/python3.10/dist-packages (from pre-commit==2.15.0) (0.10.2)
Collecting virtualenv>=20.0.8 (from pre-commit==2.15.0)
Downloading virtualenv-20.27.1-py3-none-any.whl.metadata (4.5 kB)
Collecting distlib<1,>=0.3.7 (from virtualenv>=20.0.8->pre-commit==2.15.0)
Downloading distlib-0.3.9-py2.py3-none-any.whl.metadata (5.2 kB)
Requirement already satisfied: filelock<4,>=3.12.2 in /usr/local/lib/python3.10/dist-packages (from virtualenv>=20.0.8->pre-commit==2.15.0) (3.16.1)
Requirement already satisfied: platformdirs<5,>=3.9.1 in /usr/local/lib/python3.10/dist-packages (from virtualenv>=20.0.8->pre-commit==2.15.0) (4.3.6)
Downloading pre_commit-2.15.0-py2.py3-none-any.whl (191 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 191.5/191.5 kB 8.5 MB/s eta 0:00:00
?25hDownloading cfgv-3.4.0-py2.py3-none-any.whl (7.2 kB)
Downloading identify-2.6.2-py2.py3-none-any.whl (98 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 99.0/99.0 kB 9.5 MB/s eta 0:00:00
?25hDownloading nodeenv-1.9.1-py2.py3-none-any.whl (22 kB)
Downloading virtualenv-20.27.1-py3-none-any.whl (3.1 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.1/3.1 MB 63.0 MB/s eta 0:00:00
?25hDownloading distlib-0.3.9-py2.py3-none-any.whl (468 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 469.0/469.0 kB 37.3 MB/s eta 0:00:00
?25hInstalling collected packages: distlib, virtualenv, nodeenv, identify, cfgv, pre-commit
Successfully installed cfgv-3.4.0 distlib-0.3.9 identify-2.6.2 nodeenv-1.9.1 pre-commit-2.15.0 virtualenv-20.27.1
Looking in links: https://pytorch-geometric.com/whl/torch-2.0.1+cu117.html
Collecting pyg-lib==0.2.0
Downloading https://data.pyg.org/whl/torch-2.0.0%2Bcu117/pyg_lib-0.2.0%2Bpt20cu117-cp310-cp310-linux_x86_64.whl (1.4 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.4/1.4 MB 10.4 MB/s eta 0:00:00
?25hCollecting torch-scatter==2.1.1
Downloading https://data.pyg.org/whl/torch-2.0.0%2Bcu117/torch_scatter-2.1.1%2Bpt20cu117-cp310-cp310-linux_x86_64.whl (10.2 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 10.2/10.2 MB 34.5 MB/s eta 0:00:00
?25hCollecting torch-sparse==0.6.17
Downloading https://data.pyg.org/whl/torch-2.0.0%2Bcu117/torch_sparse-0.6.17%2Bpt20cu117-cp310-cp310-linux_x86_64.whl (4.8 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4.8/4.8 MB 25.2 MB/s eta 0:00:00
?25hCollecting torch-cluster==1.6.1
Downloading https://data.pyg.org/whl/torch-2.0.0%2Bcu117/torch_cluster-1.6.1%2Bpt20cu117-cp310-cp310-linux_x86_64.whl (3.3 MB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.3/3.3 MB 19.4 MB/s eta 0:00:00
?25hCollecting torch-geometric==2.3.1
Downloading torch_geometric-2.3.1.tar.gz (661 kB)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 661.6/661.6 kB 13.8 MB/s eta 0:00:00
?25h Installing build dependencies ... ?25l?25hdone
Getting requirements to build wheel ... ?25l?25hdone
Preparing metadata (pyproject.toml) ... ?25l?25hdone
Requirement already satisfied: scipy in /usr/local/lib/python3.10/dist-packages (from torch-sparse==0.6.17) (1.13.1)
Requirement already satisfied: tqdm in /usr/local/lib/python3.10/dist-packages (from torch-geometric==2.3.1) (4.66.6)
Requirement already satisfied: numpy in /usr/local/lib/python3.10/dist-packages (from torch-geometric==2.3.1) (1.26.4)
Requirement already satisfied: jinja2 in /usr/local/lib/python3.10/dist-packages (from torch-geometric==2.3.1) (3.1.4)
Requirement already satisfied: requests in /usr/local/lib/python3.10/dist-packages (from torch-geometric==2.3.1) (2.32.3)
Requirement already satisfied: pyparsing in /usr/local/lib/python3.10/dist-packages (from torch-geometric==2.3.1) (3.2.0)
Requirement already satisfied: scikit-learn in /usr/local/lib/python3.10/dist-packages (from torch-geometric==2.3.1) (1.5.2)
Requirement already satisfied: psutil>=5.8.0 in /usr/local/lib/python3.10/dist-packages (from torch-geometric==2.3.1) (5.9.5)
Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.10/dist-packages (from jinja2->torch-geometric==2.3.1) (3.0.2)
Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.10/dist-packages (from requests->torch-geometric==2.3.1) (3.4.0)
Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.10/dist-packages (from requests->torch-geometric==2.3.1) (3.10)
Requirement already satisfied: urllib3<3,>=1.21.1 in /usr/local/lib/python3.10/dist-packages (from requests->torch-geometric==2.3.1) (2.2.3)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.10/dist-packages (from requests->torch-geometric==2.3.1) (2024.8.30)
Requirement already satisfied: joblib>=1.2.0 in /usr/local/lib/python3.10/dist-packages (from scikit-learn->torch-geometric==2.3.1) (1.4.2)
Requirement already satisfied: threadpoolctl>=3.1.0 in /usr/local/lib/python3.10/dist-packages (from scikit-learn->torch-geometric==2.3.1) (3.5.0)
Building wheels for collected packages: torch-geometric
Building wheel for torch-geometric (pyproject.toml) ... ?25l?25hdone
Created wheel for torch-geometric: filename=torch_geometric-2.3.1-py3-none-any.whl size=910459 sha256=35434d71c83fdc05973d2c6dd3204187657e0a1070a5e5ebb197683ce82e59fd
Stored in directory: /root/.cache/pip/wheels/ac/dc/30/e2874821ff308ee67dcd7a66dbde912411e19e35a1addda028
Successfully built torch-geometric
Installing collected packages: torch-scatter, pyg-lib, torch-sparse, torch-cluster, torch-geometric
Successfully installed pyg-lib-0.2.0+pt20cu117 torch-cluster-1.6.1+pt20cu117 torch-geometric-2.3.1 torch-scatter-2.1.1+pt20cu117 torch-sparse-0.6.17+pt20cu117
8.7.3. Question 01: Create graph#
As we have learned, implementing GNNs requires building the graph first, so we define the functions for creating the graphs.
In this question, we will guide you through four parts (a standalone sketch of the key library calls follows the code block below):
Visualize the graph structure in the context of the larger program
Save the edge indices and features via Torch
Create a 2D directed graph with a regular grid structure
Save the code in your working path
# Standard library
import os
from argparse import ArgumentParser
# Third-party
import matplotlib
import matplotlib.pyplot as plt
import networkx
import numpy as np
import scipy.spatial
import torch
import torch_geometric as pyg
from torch_geometric.utils.convert import from_networkx
def plot_graph(graph, title=None):
fig, axis = plt.subplots(figsize=(8, 8), dpi=200) # W,H
edge_index = graph.edge_index
pos = graph.pos
# Fix for re-indexed edge indices only containing mesh nodes at
# higher levels in hierarchy
edge_index = edge_index - edge_index.min()
if pyg.utils.is_undirected(edge_index):
# Keep only 1 direction of edge_index
edge_index = edge_index[:, edge_index[0] < edge_index[1]] # (2, M/2)
# TODO: indicate direction of directed edges
# Move all to cpu and numpy, compute (in)-degrees
############# Your code here ############
## Note:
## 1: Please use the function 'pyg.utils.degree' to compute the degrees
## 2: Remember to move degrees, edge index, and pos to cpu and then convert them as numpy arrays
#########################################
# Plot edges
from_pos = pos[edge_index[0]] # (M/2, 2)
to_pos = pos[edge_index[1]] # (M/2, 2)
edge_lines = np.stack((from_pos, to_pos), axis=1)
axis.add_collection(
matplotlib.collections.LineCollection(
edge_lines, lw=0.4, colors="black", zorder=1
)
)
# Plot nodes
node_scatter = axis.scatter(
pos[:, 0],
pos[:, 1],
c=degrees,
s=3,
marker="o",
zorder=2,
cmap="viridis",
clim=None,
)
plt.colorbar(node_scatter, aspect=50)
if title is not None:
axis.set_title(title)
return fig, axis
def sort_nodes_internally(nx_graph):
# For some reason the networkx .nodes() return list can not be sorted,
# but this is the ordering used by pyg when converting.
# This function fixes this.
H = networkx.DiGraph()
H.add_nodes_from(sorted(nx_graph.nodes(data=True)))
H.add_edges_from(nx_graph.edges(data=True))
return H
def save_edges(graph, name, base_path):
############# Your code here ############
## Note:
## 1: Use the torch.save function to save the edge indices of the graph as '.pt' file in the specified base path
## 2: Create a tensor of edge features by concatenating two attributes: length of each edge 'len.unsqueeze(1)' and vector difference 'vdiff', save as float32
## 3: Same as before, remember to save the edge features tensor created in Step 2, and also need to be saved as the '.pt' file in the specified base path
## Hint: If necessary, you can check the below function for how to save edges list to prepare this one
#########################################
def save_edges_list(graphs, name, base_path):
torch.save(
[graph.edge_index for graph in graphs],
os.path.join(base_path, f"{name}_edge_index.pt"),
)
edge_features = [
torch.cat((graph.len.unsqueeze(1), graph.vdiff), dim=1).to(
torch.float32
)
for graph in graphs
] # Save as float32
torch.save(edge_features, os.path.join(base_path, f"{name}_features.pt"))
def from_networkx_with_start_index(nx_graph, start_index):
pyg_graph = from_networkx(nx_graph)
pyg_graph.edge_index += start_index
return pyg_graph
# This mk_2d_graph function essentially creates a 2D directed graph with a regular grid structure, including diagonal connections.
# Each edge has associated length and vector difference attributes, which can be useful for various graph-based computations or analyses.
def mk_2d_graph(xy, nx, ny):
xm, xM = np.amin(xy[0][0, :]), np.amax(xy[0][0, :])
ym, yM = np.amin(xy[1][:, 0]), np.amax(xy[1][:, 0])
# avoid nodes on the border: compute a grid of nodes slightly inset from the edges, then assign node positions:
dx = (xM - xm) / nx
dy = (yM - ym) / ny
lx = np.linspace(xm + dx / 2, xM - dx / 2, nx)
ly = np.linspace(ym + dy / 2, yM - dy / 2, ny)
############# Your code here ############
## Note:
## 1: Create the 2D meshgrid mg using numpy function
## 2: Create the 2D grid graph using networkx function
## 3: Assign the node positions based on the meshgrid coordinates using a 'for' loop
#########################################
# add diagonal edges
g.add_edges_from(
[((x, y), (x + 1, y + 1)) for x in range(nx - 1) for y in range(ny - 1)]
+ [
((x + 1, y), (x, y + 1))
for x in range(nx - 1)
for y in range(ny - 1)
]
)
############# Your code here ############
## Note:
## 1: The graph should be converted to a directed graph 'dg' (Hint: considering the function in networkx)
## 2: For each edge in the original undirected graph, we can use the nodes 'u' and 'v' to loop all:
## - The edge length (len) is calculated as the Euclidean distance between nodes.
## - The vector difference (vdiff) between node positions is calculated.
## - Edges are added in both directions (making it directed) with the same length but opposite vector differences.
## 3: Remember to calculate all the edges
#########################################
def prepend_node_index(graph, new_index):
# Relabel node indices in graph, insert (graph_level, i, j)
ijk = [tuple((new_index,) + x) for x in graph.nodes]
to_mapping = dict(zip(graph.nodes, ijk))
return networkx.relabel_nodes(graph, to_mapping, copy=True)
def main():
parser = ArgumentParser(description="Graph generation arguments")
parser.add_argument(
"--dataset",
type=str,
default="meps_example",
help="Dataset to load grid point coordinates from "
"(default: meps_example)",
)
parser.add_argument(
"--graph",
type=str,
default="multiscale",
help="Name to save graph as (default: multiscale)",
)
parser.add_argument(
"--plot",
type=int,
default=0,
help="If graphs should be plotted during generation "
"(default: 0 (false))",
)
parser.add_argument(
"--levels",
type=int,
help="Limit multi-scale mesh to given number of levels, "
"from bottom up (default: None (no limit))",
)
parser.add_argument(
"--hierarchical",
type=int,
default=0,
help="Generate hierarchical mesh graph (default: 0, no)",
)
args = parser.parse_args()
# Load grid positions
static_dir_path = os.path.join("data", args.dataset, "static")
graph_dir_path = os.path.join("graphs", args.graph)
os.makedirs(graph_dir_path, exist_ok=True)
xy = np.load(os.path.join(static_dir_path, "nwp_xy.npy"))
grid_xy = torch.tensor(xy)
pos_max = torch.max(torch.abs(grid_xy))
#
# Mesh
#
# graph geometry
nx = 3 # number of children = nx**2
nlev = int(np.log(max(xy.shape)) / np.log(nx))
nleaf = nx**nlev # leaves at the bottom = nleaf**2
mesh_levels = nlev - 1
if args.levels:
# Limit the levels in mesh graph
mesh_levels = min(mesh_levels, args.levels)
print(f"nlev: {nlev}, nleaf: {nleaf}, mesh_levels: {mesh_levels}")
# multi resolution tree levels
G = []
for lev in range(1, mesh_levels + 1):
n = int(nleaf / (nx**lev))
g = mk_2d_graph(xy, n, n)
if args.plot:
plot_graph(from_networkx(g), title=f"Mesh graph, level {lev}")
plt.show()
G.append(g)
if args.hierarchical:
# Relabel nodes of each level with level index first
G = [
prepend_node_index(graph, level_i)
for level_i, graph in enumerate(G)
]
num_nodes_level = np.array([len(g_level.nodes) for g_level in G])
# First node index in each level in the hierarchical graph
first_index_level = np.concatenate(
(np.zeros(1, dtype=int), np.cumsum(num_nodes_level[:-1]))
)
# Create inter-level mesh edges
up_graphs = []
down_graphs = []
for from_level, to_level, G_from, G_to, start_index in zip(
range(1, mesh_levels),
range(0, mesh_levels - 1),
G[1:],
G[:-1],
first_index_level[: mesh_levels - 1],
):
# start out from graph at from level
G_down = G_from.copy()
G_down.clear_edges()
G_down = networkx.DiGraph(G_down)
# Add nodes of to level
G_down.add_nodes_from(G_to.nodes(data=True))
# build kd tree for mesh point pos
# order in vm should be same as in vm_xy
v_to_list = list(G_to.nodes)
v_from_list = list(G_from.nodes)
v_from_xy = np.array([xy for _, xy in G_from.nodes.data("pos")])
kdt_m = scipy.spatial.KDTree(v_from_xy)
# add edges from mesh to grid
for v in v_to_list:
# find 1(?) nearest neighbours (index to vm_xy)
neigh_idx = kdt_m.query(G_down.nodes[v]["pos"], 1)[1]
u = v_from_list[neigh_idx]
# add edge from mesh to grid
G_down.add_edge(u, v)
d = np.sqrt(
np.sum(
(G_down.nodes[u]["pos"] - G_down.nodes[v]["pos"]) ** 2
)
)
G_down.edges[u, v]["len"] = d
G_down.edges[u, v]["vdiff"] = (
G_down.nodes[u]["pos"] - G_down.nodes[v]["pos"]
)
# relabel nodes to integers (sorted)
G_down_int = networkx.convert_node_labels_to_integers(
G_down, first_label=start_index, ordering="sorted"
) # Issue with sorting here
G_down_int = sort_nodes_internally(G_down_int)
pyg_down = from_networkx_with_start_index(G_down_int, start_index)
# Create up graph, invert downwards edges
up_edges = torch.stack(
(pyg_down.edge_index[1], pyg_down.edge_index[0]), dim=0
)
pyg_up = pyg_down.clone()
pyg_up.edge_index = up_edges
up_graphs.append(pyg_up)
down_graphs.append(pyg_down)
if args.plot:
plot_graph(
pyg_down, title=f"Down graph, {from_level} -> {to_level}"
)
plt.show()
plot_graph(
pyg_down, title=f"Up graph, {to_level} -> {from_level}"
)
plt.show()
# Save up and down edges
save_edges_list(up_graphs, "mesh_up", graph_dir_path)
save_edges_list(down_graphs, "mesh_down", graph_dir_path)
# Extract intra-level edges for m2m
m2m_graphs = [
from_networkx_with_start_index(
networkx.convert_node_labels_to_integers(
level_graph, first_label=start_index, ordering="sorted"
),
start_index,
)
for level_graph, start_index in zip(G, first_index_level)
]
mesh_pos = [graph.pos.to(torch.float32) for graph in m2m_graphs]
# For use in g2m and m2g
G_bottom_mesh = G[0]
joint_mesh_graph = networkx.union_all([graph for graph in G])
all_mesh_nodes = joint_mesh_graph.nodes(data=True)
else:
# combine all levels to one graph
G_tot = G[0]
for lev in range(1, len(G)):
nodes = list(G[lev - 1].nodes)
n = int(np.sqrt(len(nodes)))
ij = (
np.array(nodes)
.reshape((n, n, 2))[1::nx, 1::nx, :]
.reshape(int(n / nx) ** 2, 2)
)
ij = [tuple(x) for x in ij]
G[lev] = networkx.relabel_nodes(G[lev], dict(zip(G[lev].nodes, ij)))
G_tot = networkx.compose(G_tot, G[lev])
# Relabel mesh nodes to start with 0
G_tot = prepend_node_index(G_tot, 0)
# relabel nodes to integers (sorted)
G_int = networkx.convert_node_labels_to_integers(
G_tot, first_label=0, ordering="sorted"
)
# Graph to use in g2m and m2g
G_bottom_mesh = G_tot
all_mesh_nodes = G_tot.nodes(data=True)
# export the nx graph to PyTorch geometric
pyg_m2m = from_networkx(G_int)
m2m_graphs = [pyg_m2m]
mesh_pos = [pyg_m2m.pos.to(torch.float32)]
if args.plot:
plot_graph(pyg_m2m, title="Mesh-to-mesh")
plt.show()
# Save m2m edges
save_edges_list(m2m_graphs, "m2m", graph_dir_path)
# Divide mesh node pos by max coordinate of grid cell
mesh_pos = [pos / pos_max for pos in mesh_pos]
# Save mesh positions
torch.save(
mesh_pos, os.path.join(graph_dir_path, "mesh_features.pt")
) # mesh pos, in float32
#
# Grid2Mesh
#
# radius within which grid nodes are associated with a mesh node
# (in terms of mesh distance)
DM_SCALE = 0.67
# mesh nodes on lowest level
vm = G_bottom_mesh.nodes
vm_xy = np.array([xy for _, xy in vm.data("pos")])
# distance between mesh nodes
dm = np.sqrt(
np.sum((vm.data("pos")[(0, 1, 0)] - vm.data("pos")[(0, 0, 0)]) ** 2)
)
# grid nodes
Ny, Nx = xy.shape[1:]
G_grid = networkx.grid_2d_graph(Ny, Nx)
G_grid.clear_edges()
# vg features (only pos introduced here)
for node in G_grid.nodes:
# pos is in feature but here explicit for convenience
G_grid.nodes[node]["pos"] = np.array([xy[0][node], xy[1][node]])
# add 1000 to node key to separate grid nodes (1000,i,j) from mesh nodes
# (i,j) and impose sorting order such that vm are the first nodes
G_grid = prepend_node_index(G_grid, 1000)
# build kd tree for grid point pos
# order in vg_list should be same as in vg_xy
vg_list = list(G_grid.nodes)
vg_xy = np.array([[xy[0][node[1:]], xy[1][node[1:]]] for node in vg_list])
kdt_g = scipy.spatial.KDTree(vg_xy)
# now add (all) mesh nodes, include features (pos)
G_grid.add_nodes_from(all_mesh_nodes)
# Re-create graph with sorted node indices
# Need to do sorting of nodes this way for indices to map correctly to pyg
G_g2m = networkx.Graph()
G_g2m.add_nodes_from(sorted(G_grid.nodes(data=True)))
# turn into directed graph
G_g2m = networkx.DiGraph(G_g2m)
# add edges
for v in vm:
# find neighbours (index to vg_xy)
neigh_idxs = kdt_g.query_ball_point(vm[v]["pos"], dm * DM_SCALE)
for i in neigh_idxs:
u = vg_list[i]
# add edge from grid to mesh
G_g2m.add_edge(u, v)
d = np.sqrt(
np.sum((G_g2m.nodes[u]["pos"] - G_g2m.nodes[v]["pos"]) ** 2)
)
G_g2m.edges[u, v]["len"] = d
G_g2m.edges[u, v]["vdiff"] = (
G_g2m.nodes[u]["pos"] - G_g2m.nodes[v]["pos"]
)
pyg_g2m = from_networkx(G_g2m)
if args.plot:
plot_graph(pyg_g2m, title="Grid-to-mesh")
plt.show()
#
# Mesh2Grid
#
# start out from Grid2Mesh and then replace edges
G_m2g = G_g2m.copy()
G_m2g.clear_edges()
# build kd tree for mesh point pos
# order in vm should be same as in vm_xy
vm_list = list(vm)
kdt_m = scipy.spatial.KDTree(vm_xy)
# add edges from mesh to grid
for v in vg_list:
# find 4 nearest neighbours (index to vm_xy)
neigh_idxs = kdt_m.query(G_m2g.nodes[v]["pos"], 4)[1]
for i in neigh_idxs:
u = vm_list[i]
# add edge from mesh to grid
G_m2g.add_edge(u, v)
d = np.sqrt(
np.sum((G_m2g.nodes[u]["pos"] - G_m2g.nodes[v]["pos"]) ** 2)
)
G_m2g.edges[u, v]["len"] = d
G_m2g.edges[u, v]["vdiff"] = (
G_m2g.nodes[u]["pos"] - G_m2g.nodes[v]["pos"]
)
# relabel nodes to integers (sorted)
G_m2g_int = networkx.convert_node_labels_to_integers(
G_m2g, first_label=0, ordering="sorted"
)
pyg_m2g = from_networkx(G_m2g_int)
if args.plot:
plot_graph(pyg_m2g, title="Mesh-to-grid")
plt.show()
# Save g2m and m2g everything
# g2m
save_edges(pyg_g2m, "g2m", graph_dir_path)
# m2g
save_edges(pyg_m2g, "m2g", graph_dir_path)
if __name__ == "__main__":
main()
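Before moving on, here is a small standalone sketch that is not the solution to the placeholders above and not part of create_mesh.py; it only illustrates the library calls the hints refer to: pyg.utils.degree for node degrees, and a tiny networkx grid graph with meshgrid positions turned into a directed graph carrying 'len' and 'vdiff' edge attributes. All shapes and values are made up.
# Standalone illustration only -- toy values, not the exercise solution
import networkx
import numpy as np
import torch
import torch_geometric as pyg

# (a) In-degrees with pyg.utils.degree, then moved to cpu/numpy as the hint suggests
edge_index = torch.tensor([[0, 1, 2, 2], [1, 2, 0, 1]])  # (2, M)
pos = torch.rand(3, 2)  # (N, 2) node positions
degrees = pyg.utils.degree(edge_index[1], num_nodes=pos.shape[0]).cpu().numpy()
print(degrees)  # [1. 2. 1.]

# (b) A tiny regular grid: meshgrid positions on a 2D grid graph, then a directed
#     version with per-edge length and vector-difference attributes
nx_, ny_ = 3, 3
mg = np.meshgrid(np.linspace(0, 1, nx_), np.linspace(0, 1, ny_))
g = networkx.grid_2d_graph(ny_, nx_)
for node in g.nodes:
    g.nodes[node]["pos"] = np.array([mg[0][node], mg[1][node]])

dg = networkx.DiGraph(g)  # keeps both directions of every undirected edge
for u, v in g.edges():
    vdiff = dg.nodes[u]["pos"] - dg.nodes[v]["pos"]
    d = np.sqrt(np.sum(vdiff**2))
    dg.edges[u, v]["len"], dg.edges[u, v]["vdiff"] = d, vdiff
    dg.edges[v, u]["len"], dg.edges[v, u]["vdiff"] = d, -vdiff

print(dg.number_of_nodes(), dg.number_of_edges())  # 9 nodes, 24 directed edges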
Now, after completing the create_mesh code block above, we should save it as a ‘.py’ file for the prediction steps that follow.
Remember to save it as ‘create_mesh.py’ in the correct path.
############# Your code here ############
## Note:
## 1: Save the previous code block as '.py' file, named it as 'create_mesh.py'.
## 2: Save it in the correct path (i.e., .../neural-lam-prob_model_lam/)
#########################################
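One possible way to do this is sketched below, under the assumption that the repository was unpacked to the Drive path shown; adjust it to your own layout.
# Option 1: re-run the big code cell above with this as its FIRST line, so Colab
# writes the cell contents to disk instead of executing them:
#   %%writefile /content/drive/MyDrive/neural-lam-prob_model_lam/create_mesh.py
#
# Option 2 (afterwards): verify the file landed where the later steps expect it.
# The path is an assumption -- adjust it to wherever you unpacked the repository.
import os
repo_path = "/content/drive/MyDrive/neural-lam-prob_model_lam"  # hypothetical path
print(os.path.isfile(os.path.join(repo_path, "create_mesh.py")))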
8.7.4. Question 02: Create grid features#
Besides the grids and the graph we built previously, we also need static features.
In this question, we will guide you through designing a function that pre-computes and saves the static grid features used in the next GNN tasks (a hedged sketch of these steps follows the code skeleton below):
Load, flatten, and normalize the necessary features
Load and process grid features
Load geopotential features
Border mask
Combine these features
Save pre-computed features
# Standard library
import os
from argparse import ArgumentParser
# Third-party
import numpy as np
import torch
def main():
"""
Pre-compute all static features related to the grid nodes
"""
parser = ArgumentParser(description="Training arguments")
parser.add_argument(
"--dataset",
type=str,
default="meps_example",
help="Dataset to compute weights for (default: meps_example)",
)
args = parser.parse_args()
static_dir_path = os.path.join("data", args.dataset, "static")
# -- Static grid node features --
############# Your code here ############
## Note:
## 1: You should use the torch.tensor to load all necessary data.
## 2: You can find the related static features in this path (as indicated before: ".../neural-lam-prob_model_lam/data/meps_example/static/...")
## 3: When you load them, you can check the numpy array first for the flatten steps.
## 4: 'torch.cat' function will help you to concatenate grid features.
## 5: Remember to save the features in the static path.
#########################################
# Grid Coordinates
# Geopotential Height
# Border mask
# Concatenate grid features
# Save grid features
if __name__ == "__main__":
main()
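As a reference point, here is a minimal sketch of the kind of operations the placeholder asks for. The file names (nwp_xy.npy, surface_geopotential.npy, border_mask.npy) and the normalization choices are assumptions based on the MEPS example layout; check the actual contents of data/meps_example/static/ before relying on them.
# Illustrative sketch only -- verify file names and shapes against your dataset
import os
import numpy as np
import torch

static_dir_path = os.path.join("data", "meps_example", "static")

# Grid coordinates: (2, N_y, N_x) -> (N_grid, 2), scaled by the max absolute coordinate
grid_xy = torch.tensor(
    np.load(os.path.join(static_dir_path, "nwp_xy.npy")), dtype=torch.float32
)
grid_xy = grid_xy.flatten(1, 2).T
grid_xy = grid_xy / torch.max(torch.abs(grid_xy))

# Geopotential height: (N_y, N_x) -> (N_grid, 1), standardized (one reasonable choice)
geopotential = torch.tensor(
    np.load(os.path.join(static_dir_path, "surface_geopotential.npy")),
    dtype=torch.float32,
).flatten(0, 1).unsqueeze(1)
geopotential = (geopotential - geopotential.mean()) / geopotential.std()

# Border mask: (N_y, N_x) -> (N_grid, 1)
border_mask = torch.tensor(
    np.load(os.path.join(static_dir_path, "border_mask.npy")), dtype=torch.float32
).flatten(0, 1).unsqueeze(1)

# Concatenate grid features and save
grid_features = torch.cat((grid_xy, geopotential, border_mask), dim=1)  # (N_grid, 4)
torch.save(grid_features, os.path.join(static_dir_path, "grid_features.pt"))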
Now, after completing the skeleton code block above, we should save it as a ‘.py’ file for the prediction steps that follow.
Remember to save it as ‘create_grid_features.py’ in the correct path.
############# Your code here ############
## Note:
## 1: Save the previous code block as '.py' file, named it as 'create_grid_features.py'.
## 2: Save it in the correct path (i.e., .../neural-lam-prob_model_lam/neural_lam/)
#########################################
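Once both scripts are saved, they can be invoked from the notebook with the command-line flags defined in their argument parsers. The working directory and paths below are assumptions; adjust them to wherever you placed the files (for example, add the neural_lam/ prefix if you saved create_grid_features.py there).
# Hypothetical invocation from the notebook, using the argparse flags defined above:
# %cd /content/drive/MyDrive/neural-lam-prob_model_lam
# !python create_mesh.py --dataset meps_example --graph multiscale --plot 1
# !python create_grid_features.py --dataset meps_example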
8.7.5. Question 03: Create parameter weight#
Perfect! Now we have the functions to create the graph and the grid features.
Although we will use pre-trained models for the GNN prediction, we still need to understand the parameter weights and the related statistics for each parameter. In this exercise, we will compute the statistics (i.e., mean and standard deviation) for the weather parameters (e.g., temperature) and for the flux forcing (i.e., specific forcing components of the weather model).
The one-step-differences method is also essential: computing statistical metrics for the time-step differences helps us capture temporal changes in the data.
That is the purpose of this question.
To evaluate these GNN models, the authors conduct experiments on both global and limited-area forecasting. The models are implemented in PyTorch and trained on 8 A100 80GB GPUs in a data-parallel configuration. Training takes 700-1400 total GPU-hours for the global models, and around half of that for the limited-area models.
That is why they implement batched sampling on a single GPU to reduce the resource demand. Using this strategy, 80 ensemble members are produced in 200 s (2.5 s per member) for global forecasting.
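A quick reminder of the identity used throughout the script below: rather than storing the full dataset, we accumulate batch-wise means of the values and of their squares, and combine them at the end via the moment identity

$$ \mathrm{Std}[x] = \sqrt{\mathbb{E}[x^2] - \mathbb{E}[x]^2} $$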
# Standard library
import os
from argparse import ArgumentParser
# Third-party
import numpy as np
import torch
from tqdm import tqdm
# First-party
from neural_lam import constants
from neural_lam.weather_dataset import WeatherDataset
def main():
"""
Pre-compute parameter weights to be used in loss function
"""
# Part 01: Argument parsing
parser = ArgumentParser(description="Training arguments")
parser.add_argument(
"--dataset",
type=str,
default="meps_example",
help="Dataset to compute weights for (default: meps_example)",
)
parser.add_argument(
"--batch_size",
type=int,
default=32,
help="Batch size when iterating over the dataset",
)
parser.add_argument(
"--step_length",
type=int,
default=3,
help="Step length in hours to consider single time step (default: 3)",
)
parser.add_argument(
"--n_workers",
type=int,
default=4,
help="Number of workers in data loader (default: 4)",
)
args = parser.parse_args()
static_dir_path = os.path.join("data", args.dataset, "static")
# Part 02: Pre-compute parameter weights
# Create parameter weights based on height
# based on fig A.1 in graph cast paper
w_dict = {
"2": 1.0,
"0": 0.1,
"65": 0.065,
"1000": 0.1,
"850": 0.05,
"500": 0.03,
}
w_list = np.array(
[w_dict[par.split("_")[-2]] for par in constants.PARAM_NAMES]
)
print("Saving parameter weights...")
np.save(
os.path.join(static_dir_path, "parameter_weights.npy"),
w_list.astype("float32"),
)
# Part 03: Calculate necessary statistics
# Load dataset without any subsampling
ds = WeatherDataset(
args.dataset,
split="train",
subsample_step=1,
pred_length=63,
standardize=False,
) # Without standardization
# Divide the dataset into batches for efficient processing
loader = torch.utils.data.DataLoader(
ds, args.batch_size, shuffle=False, num_workers=args.n_workers
)
# Compute mean and std.-dev. of each parameter (+ flux forcing)
# across full dataset
print("Computing mean and std.-dev. for parameters...")
means = []
squares = []
flux_means = []
flux_squares = []
############# Your code here ############
## Note:
## 1: Based on the batches produced by the loader, use 'init_batch', 'target_batch', and 'forcing_batch' to finish the process.
## 2: Wrap the loop with 'tqdm()' so that you get a progress bar.
## 3: Use 'torch.cat' to combine the initial and target batches into a single batch.
## 4: Compute the mean and squared values of each feature across the dataset via torch.
## 5: Extract the flux from the forcing batch, then compute its mean and squared values.
## 6: Finally, aggregate the batch-wise statistics into a global mean and standard deviation, for both the parameters and the flux (Hint: the 'torch.stack()' function is useful here).
## 7: If necessary, check the one-step-difference loop below, which follows a similar logic.
#########################################
for init_batch, target_batch, forcing_batch in tqdm(loader):
############# Your code here ############
#########################################
# Flux at 1st windowed position is index 1 in forcing
flux_batch = forcing_batch[:, :, :, 1]
############# Your code here ############
#########################################
############# Your code here ############
#########################################
print("Saving mean, std.-dev, flux_stats...")
torch.save(mean, os.path.join(static_dir_path, "parameter_mean.pt"))
torch.save(std, os.path.join(static_dir_path, "parameter_std.pt"))
torch.save(flux_stats, os.path.join(static_dir_path, "flux_stats.pt"))
# Compute mean and std.-dev. of one-step differences across the dataset
print("Computing mean and std.-dev. for one-step differences...")
ds_standard = WeatherDataset(
args.dataset,
split="train",
subsample_step=1,
pred_length=63,
standardize=True,
) # Re-load with standardization
loader_standard = torch.utils.data.DataLoader(
ds_standard, args.batch_size, shuffle=False, num_workers=args.n_workers
)
used_subsample_len = (65 // args.step_length) * args.step_length
diff_means = []
diff_squares = []
for init_batch, target_batch, _ in tqdm(loader_standard):
batch = torch.cat(
(init_batch, target_batch), dim=1
) # (N_batch, N_t', N_grid, d_features)
# Note: batch contains only 1h-steps
stepped_batch = torch.cat(
[
batch[:, ss_i : used_subsample_len : args.step_length]
for ss_i in range(args.step_length)
],
dim=0,
)
# (N_batch', N_t, N_grid, d_features),
# N_batch' = args.step_length*N_batch
batch_diffs = stepped_batch[:, 1:] - stepped_batch[:, :-1]
# (N_batch', N_t-1, N_grid, d_features)
diff_means.append(
torch.mean(batch_diffs, dim=(1, 2))
) # (N_batch', d_features,)
diff_squares.append(
torch.mean(batch_diffs**2, dim=(1, 2))
) # (N_batch', d_features,)
diff_mean = torch.mean(torch.cat(diff_means, dim=0), dim=0) # (d_features)
diff_second_moment = torch.mean(torch.cat(diff_squares, dim=0), dim=0)
diff_std = torch.sqrt(diff_second_moment - diff_mean**2) # (d_features)
print("Saving one-step difference mean and std.-dev...")
torch.save(diff_mean, os.path.join(static_dir_path, "diff_mean.pt"))
torch.save(diff_std, os.path.join(static_dir_path, "diff_std.pt"))
if __name__ == "__main__":
main()
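For reference, here is one possible way to fill in the statistics loop above. It is a sketch, not the only valid solution; it reuses the lists means, squares, flux_means and flux_squares defined in the skeleton, together with the loader, and mirrors the structure of the one-step-difference loop.
for init_batch, target_batch, forcing_batch in tqdm(loader):
    batch = torch.cat(
        (init_batch, target_batch), dim=1
    )  # (N_batch, N_t, N_grid, d_features)
    means.append(torch.mean(batch, dim=(1, 2)))  # (N_batch, d_features)
    squares.append(torch.mean(batch**2, dim=(1, 2)))  # (N_batch, d_features)

    # Flux at 1st windowed position is index 1 in forcing
    flux_batch = forcing_batch[:, :, :, 1]
    flux_means.append(torch.mean(flux_batch))  # scalar
    flux_squares.append(torch.mean(flux_batch**2))  # scalar

# Aggregate batch-wise statistics into global statistics
mean = torch.mean(torch.cat(means, dim=0), dim=0)  # (d_features,)
second_moment = torch.mean(torch.cat(squares, dim=0), dim=0)
std = torch.sqrt(second_moment - mean**2)  # (d_features,)

flux_mean = torch.mean(torch.stack(flux_means))
flux_second_moment = torch.mean(torch.stack(flux_squares))
flux_std = torch.sqrt(flux_second_moment - flux_mean**2)
flux_stats = torch.stack((flux_mean, flux_std))  # (2,)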
Now, after editing the previous code block, save it as a ‘.py’ file so it can be used in the following prediction steps.
Remember to name it ‘create_parameter_weights.py’ (this is the name the commands below expect) and place it in the correct path.
############# Your code here ############
## Note:
## 1: Save the previous code block as a '.py' file named 'create_parameter_weights.py'.
## 2: Save it in the correct path (i.e., .../neural-lam-prob_model_lam/)
#########################################
8.7.6. 8.3.6 Train models#
After this long exploration, we can finally start training our GNN models. Grab a coffee, because this will take a while!
In this exercise, we will train and compare two GNN models: GraphCast and Graph-FM (a deterministic model using a hierarchical graph).
GraphCast is a state-of-the-art AI model that makes medium-range weather forecasts (i.e., predicting weather conditions up to 10 days in advance) with unprecedented accuracy. It can also offer earlier warnings of extreme weather events: it predicts cyclone tracks with great accuracy further into the future, identifies atmospheric rivers associated with flood risk, and predicts the onset of extreme temperatures.
While most existing Neural Weather Prediction (NeurWP) methods focus on global forecasting, an important question is how these techniques can be applied to limited area modeling (LAM). The authors therefore adapt the graph-based NeurWP approach to the limited-area setting and propose a multi-scale hierarchical model extension, Graph-FM, which they validate with a local model for the Nordic region. Although they also propose a probabilistic weather forecasting model, we do not train it in this short exercise.
As the figure below indicates, the inputs of Graph-FM at the grid nodes (orange squares) are encoded to mesh nodes (blue circles), processed on the mesh, and decoded back to the grid to produce a one-step prediction.
Note that you should create a wandb (Weights & Biases) account, e.g. via your GitHub account. It lets you visualize the training process and check the initial results. The number of training epochs is set to 100 to save time; you can of course extend it to more epochs (e.g., 200).
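Before launching the training script, it can help to make sure wandb is installed and you are logged in from the notebook. A minimal sketch, assuming you have already created an account and have your API key at hand:
# Install wandb (if needed) and log in interactively; you will be prompted for your API key
!pip install -q wandb
import wandb
wandb.login()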
# Set up the working path, then get the grid features
import os
os.chdir('/content/drive/MyDrive/neural-lam-prob_model_lam/neural_lam')
!python create_grid_features.py
# Get the parameter weights
!python /content/drive/MyDrive/neural-lam-prob_model_lam/create_parameter_weights.py
Saving parameter weights...
/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Computing mean and std.-dev. for parameters...
100% 1/1 [00:32<00:00, 32.88s/it]
Saving mean, std.-dev, flux_stats...
Computing mean and std.-dev. for one-step differences...
100% 1/1 [00:14<00:00, 14.67s/it]
Saving one-step difference mean and std.-dev...
# Get the multi-scale graphs
!python create_mesh.py --graph multiscale
nlev: 5, nleaf: 243, mesh_levels: 4
/usr/local/lib/python3.10/dist-packages/torch_geometric/utils/convert.py:249: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:245.)
data[key] = torch.tensor(value)
# Get the hierarchical graphs
# A hierarchical graph can have multiple levels.
# It uses the hierarchy to represent the same environment at different levels of detail, which can help mitigate the exponential growth in complexity.
!python create_mesh.py --graph hierarchical --hierarchical 1 --levels 3
nlev: 5, nleaf: 243, mesh_levels: 3
/usr/local/lib/python3.10/dist-packages/torch_geometric/utils/convert.py:249: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:245.)
data[key] = torch.tensor(value)
# Graphcast model
!python /content/drive/MyDrive/neural-lam-prob_model_lam/train_model.py --model graphcast
Global seed set to 42
/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Loaded graph with 70345 nodes (63784 grid, 6561 mesh)
Edges in subgraphs: m2m=57616, g2m=100656, m2g=255136
wandb: Using wandb-core as the SDK backend. Please refer to https://wandb.me/wandb-core for more information.
wandb: (1) Create a W&B account
wandb: (2) Use an existing W&B account
wandb: (3) Don't visualize my results
wandb: Enter your choice: 2
wandb: You chose 'Use an existing W&B account'
wandb: Logging into wandb.ai. (Learn how to deploy a W&B server locally: https://wandb.me/wandb-server)
wandb: You can find your API key in your browser here: https://wandb.ai/authorize
wandb: Paste an API key from your profile and hit enter, or press ctrl+c to quit:
wandb: Appending key for api.wandb.ai to your netrc file: /root/.netrc
wandb: Tracking run with wandb version 0.18.7
wandb: Run data is saved locally in ./wandb/run-20241119_114936-dk8rr4bl
wandb: Run `wandb offline` to turn off syncing.
wandb: Syncing run graphcast-4x64-11_19_11-2215
wandb: ⭐️ View project at https://wandb.ai/eugeneliuhaokun-university-of-lausanne/neural-lam
wandb: 🚀 View run at https://wandb.ai/eugeneliuhaokun-university-of-lausanne/neural-lam/runs/dk8rr4bl
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
[rank: 0] Global seed set to 42
Initializing distributed: GLOBAL_RANK: 0, MEMBER: 1/1
----------------------------------------------------------------------------------------------------
distributed_backend=nccl
All distributed processes registered. Starting with 1 processes
----------------------------------------------------------------------------------------------------
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
| Name | Type | Params
--------------------------------------------------------
0 | grid_embedder | Sequential | 7.8 K
1 | g2m_embedder | Sequential | 4.5 K
2 | m2g_embedder | Sequential | 4.5 K
3 | g2m_gnn | InteractionNet | 29.2 K
4 | encoding_grid_mlp | Sequential | 8.4 K
5 | m2g_gnn | InteractionNet | 29.2 K
6 | output_map | Sequential | 5.3 K
7 | mesh_embedder | Sequential | 4.5 K
8 | m2m_embedder | Sequential | 4.5 K
9 | processor | Sequential_514e2b | 116 K
--------------------------------------------------------
214 K Trainable params
0 Non-trainable params
214 K Total params
0.859 Total estimated model params size (MB)
Sanity Checking: 0it [00:00, ?it/s]
Epoch 0: 100% 1/1 [00:28<00:00, 28.61s/it, v_num=r4bl, train_loss_step=5.360]
Epoch 1: 100% 1/1 [00:27<00:00, 27.75s/it, v_num=r4bl, train_loss_step=4.020, train_loss_epoch=5.360]
Epoch 2: 100% 1/1 [00:24<00:00, 24.62s/it, v_num=r4bl, train_loss_step=4.150, train_loss_epoch=4.020]
Epoch 3: 100% 1/1 [00:24<00:00, 24.03s/it, v_num=r4bl, train_loss_step=5.420, train_loss_epoch=4.150]
Epoch 4: 100% 1/1 [00:26<00:00, 26.65s/it, v_num=r4bl, train_loss_step=5.750, train_loss_epoch=5.420]
Epoch 5: 100% 1/1 [00:24<00:00, 24.48s/it, v_num=r4bl, train_loss_step=4.310, train_loss_epoch=5.750]
Epoch 6: 100% 1/1 [00:23<00:00, 23.88s/it, v_num=r4bl, train_loss_step=4.920, train_loss_epoch=4.310]
Epoch 7: 100% 1/1 [00:26<00:00, 26.90s/it, v_num=r4bl, train_loss_step=3.960, train_loss_epoch=4.920]
Epoch 8: 100% 1/1 [00:26<00:00, 26.38s/it, v_num=r4bl, train_loss_step=3.320, train_loss_epoch=3.960]
Epoch 9: 100% 1/1 [00:23<00:00, 23.97s/it, v_num=r4bl, train_loss_step=4.460, train_loss_epoch=3.320]
Epoch 10: 100% 1/1 [00:25<00:00, 25.94s/it, v_num=r4bl, train_loss_step=3.260, train_loss_epoch=4.460]
Epoch 11: 100% 1/1 [00:27<00:00, 27.87s/it, v_num=r4bl, train_loss_step=4.800, train_loss_epoch=3.260]
Epoch 12: 100% 1/1 [00:23<00:00, 24.00s/it, v_num=r4bl, train_loss_step=3.760, train_loss_epoch=4.800]
Epoch 13: 100% 1/1 [00:25<00:00, 25.22s/it, v_num=r4bl, train_loss_step=4.580, train_loss_epoch=3.760]
Epoch 14: 100% 1/1 [00:26<00:00, 26.87s/it, v_num=r4bl, train_loss_step=3.110, train_loss_epoch=4.580]
Epoch 15: 100% 1/1 [00:24<00:00, 24.37s/it, v_num=r4bl, train_loss_step=2.800, train_loss_epoch=3.110]
Epoch 16: 100% 1/1 [00:25<00:00, 25.88s/it, v_num=r4bl, train_loss_step=3.250, train_loss_epoch=2.800]
Epoch 17: 100% 1/1 [00:27<00:00, 27.46s/it, v_num=r4bl, train_loss_step=2.830, train_loss_epoch=3.250]
Epoch 18: 100% 1/1 [00:24<00:00, 24.30s/it, v_num=r4bl, train_loss_step=2.890, train_loss_epoch=2.830]
Epoch 19: 100% 1/1 [00:25<00:00, 25.78s/it, v_num=r4bl, train_loss_step=2.970, train_loss_epoch=2.890]
Epoch 20: 100% 1/1 [00:27<00:00, 27.01s/it, v_num=r4bl, train_loss_step=2.610, train_loss_epoch=2.970]
Epoch 21: 100% 1/1 [00:24<00:00, 24.42s/it, v_num=r4bl, train_loss_step=3.370, train_loss_epoch=2.610]
Epoch 22: 100% 1/1 [00:25<00:00, 25.74s/it, v_num=r4bl, train_loss_step=2.590, train_loss_epoch=3.370]
Epoch 23: 100% 1/1 [00:27<00:00, 27.06s/it, v_num=r4bl, train_loss_step=3.270, train_loss_epoch=2.590]
Epoch 24: 100% 1/1 [00:24<00:00, 24.82s/it, v_num=r4bl, train_loss_step=2.880, train_loss_epoch=3.270]
Epoch 25: 100% 1/1 [00:26<00:00, 26.00s/it, v_num=r4bl, train_loss_step=3.900, train_loss_epoch=2.880]
Epoch 26: 100% 1/1 [00:27<00:00, 27.07s/it, v_num=r4bl, train_loss_step=2.430, train_loss_epoch=3.900]
Epoch 27: 100% 1/1 [00:24<00:00, 24.32s/it, v_num=r4bl, train_loss_step=3.090, train_loss_epoch=2.430]
Epoch 28: 100% 1/1 [00:25<00:00, 25.48s/it, v_num=r4bl, train_loss_step=2.220, train_loss_epoch=3.090]
Epoch 29: 100% 1/1 [00:26<00:00, 26.77s/it, v_num=r4bl, train_loss_step=3.130, train_loss_epoch=2.220]
Epoch 30: 100% 1/1 [00:23<00:00, 23.99s/it, v_num=r4bl, train_loss_step=3.050, train_loss_epoch=3.130]
Epoch 31: 100% 1/1 [00:25<00:00, 25.73s/it, v_num=r4bl, train_loss_step=2.990, train_loss_epoch=3.050]
Epoch 32: 100% 1/1 [00:27<00:00, 27.26s/it, v_num=r4bl, train_loss_step=2.920, train_loss_epoch=2.990]
Epoch 33: 100% 1/1 [00:24<00:00, 24.59s/it, v_num=r4bl, train_loss_step=2.490, train_loss_epoch=2.920]
Epoch 34: 100% 1/1 [00:25<00:00, 25.65s/it, v_num=r4bl, train_loss_step=2.940, train_loss_epoch=2.490]
Epoch 35: 100% 1/1 [00:27<00:00, 27.07s/it, v_num=r4bl, train_loss_step=2.860, train_loss_epoch=2.940]
Epoch 36: 100% 1/1 [00:24<00:00, 24.73s/it, v_num=r4bl, train_loss_step=2.450, train_loss_epoch=2.860]
Epoch 37: 100% 1/1 [00:26<00:00, 26.11s/it, v_num=r4bl, train_loss_step=2.480, train_loss_epoch=2.450]
Epoch 38: 100% 1/1 [00:27<00:00, 27.46s/it, v_num=r4bl, train_loss_step=2.400, train_loss_epoch=2.480]
Epoch 39: 100% 1/1 [00:28<00:00, 28.84s/it, v_num=r4bl, train_loss_step=2.790, train_loss_epoch=2.400]
Epoch 40: 100% 1/1 [00:27<00:00, 27.16s/it, v_num=r4bl, train_loss_step=2.470, train_loss_epoch=2.790]
Epoch 41: 100% 1/1 [00:25<00:00, 25.37s/it, v_num=r4bl, train_loss_step=2.660, train_loss_epoch=2.470]
Epoch 42: 100% 1/1 [00:24<00:00, 24.83s/it, v_num=r4bl, train_loss_step=2.530, train_loss_epoch=2.660]
Epoch 43: 100% 1/1 [00:28<00:00, 28.12s/it, v_num=r4bl, train_loss_step=2.190, train_loss_epoch=2.530]
Epoch 44: 100% 1/1 [00:25<00:00, 25.12s/it, v_num=r4bl, train_loss_step=2.900, train_loss_epoch=2.190]
Epoch 45: 100% 1/1 [00:25<00:00, 25.19s/it, v_num=r4bl, train_loss_step=3.060, train_loss_epoch=2.900]
Epoch 46: 100% 1/1 [00:27<00:00, 27.99s/it, v_num=r4bl, train_loss_step=2.420, train_loss_epoch=3.060]
Epoch 47: 100% 1/1 [00:25<00:00, 25.44s/it, v_num=r4bl, train_loss_step=2.360, train_loss_epoch=2.420]
Epoch 48: 100% 1/1 [00:25<00:00, 25.07s/it, v_num=r4bl, train_loss_step=2.430, train_loss_epoch=2.360]
Epoch 49: 100% 1/1 [00:27<00:00, 27.50s/it, v_num=r4bl, train_loss_step=2.890, train_loss_epoch=2.430]
Epoch 50: 100% 1/1 [00:25<00:00, 25.47s/it, v_num=r4bl, train_loss_step=2.300, train_loss_epoch=2.890]
Epoch 51: 100% 1/1 [00:25<00:00, 25.20s/it, v_num=r4bl, train_loss_step=2.080, train_loss_epoch=2.300]
Epoch 52: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=2.080, train_loss_epoch=2.080]
Epoch 52: 100% 1/1 [00:05<00:00, 5.22s/it, v_num=r4bl, train_loss_step=2.560, train_loss_epoch=2.080]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 52: 100% 1/1 [00:27<00:00, 27.82s/it, v_num=r4bl, train_loss_step=2.560, train_loss_epoch=2.080]
Epoch 53: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=2.560, train_loss_epoch=2.560]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 53: 100% 1/1 [00:05<00:00, 5.60s/it, v_num=r4bl, train_loss_step=2.080, train_loss_epoch=2.560]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 53: 100% 1/1 [00:24<00:00, 24.87s/it, v_num=r4bl, train_loss_step=2.080, train_loss_epoch=2.560]
Epoch 54: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=2.080, train_loss_epoch=2.080]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 54: 100% 1/1 [00:04<00:00, 4.63s/it, v_num=r4bl, train_loss_step=2.230, train_loss_epoch=2.080]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 54: 100% 1/1 [00:25<00:00, 25.09s/it, v_num=r4bl, train_loss_step=2.230, train_loss_epoch=2.080]
Epoch 55: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=2.230, train_loss_epoch=2.230]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 55: 100% 1/1 [00:05<00:00, 5.18s/it, v_num=r4bl, train_loss_step=2.500, train_loss_epoch=2.230]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 55: 100% 1/1 [00:27<00:00, 27.53s/it, v_num=r4bl, train_loss_step=2.500, train_loss_epoch=2.230]
Epoch 56: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=2.500, train_loss_epoch=2.500]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 56: 100% 1/1 [00:05<00:00, 5.78s/it, v_num=r4bl, train_loss_step=2.200, train_loss_epoch=2.500]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 56: 100% 1/1 [00:25<00:00, 25.03s/it, v_num=r4bl, train_loss_step=2.200, train_loss_epoch=2.500]
Epoch 57: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=2.200, train_loss_epoch=2.200]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 57: 100% 1/1 [00:04<00:00, 4.68s/it, v_num=r4bl, train_loss_step=2.300, train_loss_epoch=2.200]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 57: 100% 1/1 [00:24<00:00, 24.51s/it, v_num=r4bl, train_loss_step=2.300, train_loss_epoch=2.200]
Epoch 58: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=2.300, train_loss_epoch=2.300]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 58: 100% 1/1 [00:04<00:00, 4.63s/it, v_num=r4bl, train_loss_step=2.650, train_loss_epoch=2.300]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 58: 100% 1/1 [00:27<00:00, 27.57s/it, v_num=r4bl, train_loss_step=2.650, train_loss_epoch=2.300]
Epoch 59: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=2.650, train_loss_epoch=2.650]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 59: 100% 1/1 [00:06<00:00, 6.49s/it, v_num=r4bl, train_loss_step=2.620, train_loss_epoch=2.650]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 59: 100% 1/1 [00:26<00:00, 26.70s/it, v_num=r4bl, train_loss_step=2.620, train_loss_epoch=2.650]
Epoch 60: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=2.620, train_loss_epoch=2.620]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 60: 100% 1/1 [00:04<00:00, 4.63s/it, v_num=r4bl, train_loss_step=2.730, train_loss_epoch=2.620]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 60: 100% 1/1 [00:23<00:00, 23.94s/it, v_num=r4bl, train_loss_step=2.730, train_loss_epoch=2.620]
Epoch 61: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=2.730, train_loss_epoch=2.730]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 61: 100% 1/1 [00:04<00:00, 4.60s/it, v_num=r4bl, train_loss_step=2.350, train_loss_epoch=2.730]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 61: 100% 1/1 [00:25<00:00, 25.68s/it, v_num=r4bl, train_loss_step=2.350, train_loss_epoch=2.730]
Epoch 62: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=2.350, train_loss_epoch=2.350]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 62: 100% 1/1 [00:06<00:00, 6.28s/it, v_num=r4bl, train_loss_step=2.090, train_loss_epoch=2.350]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 62: 100% 1/1 [00:27<00:00, 27.54s/it, v_num=r4bl, train_loss_step=2.090, train_loss_epoch=2.350]
Epoch 63: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=2.090, train_loss_epoch=2.090]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 63: 100% 1/1 [00:05<00:00, 5.13s/it, v_num=r4bl, train_loss_step=2.600, train_loss_epoch=2.090]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 63: 100% 1/1 [00:24<00:00, 24.78s/it, v_num=r4bl, train_loss_step=2.600, train_loss_epoch=2.090]
Epoch 64: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=2.600, train_loss_epoch=2.600]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 64: 100% 1/1 [00:04<00:00, 4.66s/it, v_num=r4bl, train_loss_step=2.030, train_loss_epoch=2.600]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 64: 100% 1/1 [00:25<00:00, 25.59s/it, v_num=r4bl, train_loss_step=2.030, train_loss_epoch=2.600]
Epoch 65: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=2.030, train_loss_epoch=2.030]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 65: 100% 1/1 [00:06<00:00, 6.25s/it, v_num=r4bl, train_loss_step=2.720, train_loss_epoch=2.030]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 65: 100% 1/1 [00:27<00:00, 27.55s/it, v_num=r4bl, train_loss_step=2.720, train_loss_epoch=2.030]
Epoch 66: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=2.720, train_loss_epoch=2.720]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 66: 100% 1/1 [00:05<00:00, 5.28s/it, v_num=r4bl, train_loss_step=2.270, train_loss_epoch=2.720]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 66: 100% 1/1 [00:24<00:00, 24.73s/it, v_num=r4bl, train_loss_step=2.270, train_loss_epoch=2.720]
Epoch 67: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=2.270, train_loss_epoch=2.270]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 67: 100% 1/1 [00:04<00:00, 4.57s/it, v_num=r4bl, train_loss_step=2.190, train_loss_epoch=2.270]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 67: 100% 1/1 [00:25<00:00, 25.66s/it, v_num=r4bl, train_loss_step=2.190, train_loss_epoch=2.270]
Epoch 68: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=2.190, train_loss_epoch=2.190]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 68: 100% 1/1 [00:05<00:00, 5.57s/it, v_num=r4bl, train_loss_step=1.950, train_loss_epoch=2.190]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 68: 100% 1/1 [00:26<00:00, 26.40s/it, v_num=r4bl, train_loss_step=1.950, train_loss_epoch=2.190]
Epoch 69: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=1.950, train_loss_epoch=1.950]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 69: 100% 1/1 [00:05<00:00, 5.40s/it, v_num=r4bl, train_loss_step=2.320, train_loss_epoch=1.950]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 69: 100% 1/1 [00:24<00:00, 24.86s/it, v_num=r4bl, train_loss_step=2.320, train_loss_epoch=1.950]
Epoch 70: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=2.320, train_loss_epoch=2.320]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 70: 100% 1/1 [00:04<00:00, 4.73s/it, v_num=r4bl, train_loss_step=2.230, train_loss_epoch=2.320]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 70: 100% 1/1 [00:25<00:00, 25.60s/it, v_num=r4bl, train_loss_step=2.230, train_loss_epoch=2.320]
Epoch 71: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=2.230, train_loss_epoch=2.230]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 71: 100% 1/1 [00:06<00:00, 6.19s/it, v_num=r4bl, train_loss_step=2.610, train_loss_epoch=2.230]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 71: 100% 1/1 [00:27<00:00, 27.77s/it, v_num=r4bl, train_loss_step=2.610, train_loss_epoch=2.230]
Epoch 72: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=2.610, train_loss_epoch=2.610]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 72: 100% 1/1 [00:05<00:00, 5.26s/it, v_num=r4bl, train_loss_step=2.340, train_loss_epoch=2.610]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 72: 100% 1/1 [00:24<00:00, 24.94s/it, v_num=r4bl, train_loss_step=2.340, train_loss_epoch=2.610]
Epoch 73: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=2.340, train_loss_epoch=2.340]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 73: 100% 1/1 [00:04<00:00, 4.67s/it, v_num=r4bl, train_loss_step=2.160, train_loss_epoch=2.340]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 73: 100% 1/1 [00:25<00:00, 25.72s/it, v_num=r4bl, train_loss_step=2.160, train_loss_epoch=2.340]
Epoch 74: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=2.160, train_loss_epoch=2.160]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 74: 100% 1/1 [00:05<00:00, 5.62s/it, v_num=r4bl, train_loss_step=2.110, train_loss_epoch=2.160]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 74: 100% 1/1 [00:26<00:00, 26.82s/it, v_num=r4bl, train_loss_step=2.110, train_loss_epoch=2.160]
Epoch 75: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=2.110, train_loss_epoch=2.110]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 75: 100% 1/1 [00:05<00:00, 5.20s/it, v_num=r4bl, train_loss_step=1.990, train_loss_epoch=2.110]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 75: 100% 1/1 [00:24<00:00, 24.66s/it, v_num=r4bl, train_loss_step=1.990, train_loss_epoch=2.110]
Epoch 76: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=1.990, train_loss_epoch=1.990]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 76: 100% 1/1 [00:04<00:00, 4.73s/it, v_num=r4bl, train_loss_step=2.250, train_loss_epoch=1.990]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 76: 100% 1/1 [00:25<00:00, 25.55s/it, v_num=r4bl, train_loss_step=2.250, train_loss_epoch=1.990]
Epoch 77: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=2.250, train_loss_epoch=2.250]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 77: 100% 1/1 [00:05<00:00, 5.77s/it, v_num=r4bl, train_loss_step=1.700, train_loss_epoch=2.250]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 77: 100% 1/1 [00:26<00:00, 26.84s/it, v_num=r4bl, train_loss_step=1.700, train_loss_epoch=2.250]
Epoch 78: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=1.700, train_loss_epoch=1.700]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 78: 100% 1/1 [00:05<00:00, 5.32s/it, v_num=r4bl, train_loss_step=2.290, train_loss_epoch=1.700]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 78: 100% 1/1 [00:24<00:00, 24.71s/it, v_num=r4bl, train_loss_step=2.290, train_loss_epoch=1.700]
Epoch 79: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=2.290, train_loss_epoch=2.290]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 79: 100% 1/1 [00:04<00:00, 4.69s/it, v_num=r4bl, train_loss_step=2.060, train_loss_epoch=2.290]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 79: 100% 1/1 [00:25<00:00, 25.53s/it, v_num=r4bl, train_loss_step=2.060, train_loss_epoch=2.290]
Epoch 80: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=2.060, train_loss_epoch=2.060]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 80: 100% 1/1 [00:05<00:00, 5.58s/it, v_num=r4bl, train_loss_step=2.140, train_loss_epoch=2.060]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 80: 100% 1/1 [00:26<00:00, 26.97s/it, v_num=r4bl, train_loss_step=2.140, train_loss_epoch=2.060]
Epoch 81: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=2.140, train_loss_epoch=2.140]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 81: 100% 1/1 [00:05<00:00, 5.03s/it, v_num=r4bl, train_loss_step=2.310, train_loss_epoch=2.140]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 81: 100% 1/1 [00:24<00:00, 24.37s/it, v_num=r4bl, train_loss_step=2.310, train_loss_epoch=2.140]
Epoch 82: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=2.310, train_loss_epoch=2.310]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 82: 100% 1/1 [00:04<00:00, 4.74s/it, v_num=r4bl, train_loss_step=2.110, train_loss_epoch=2.310]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 82: 100% 1/1 [00:25<00:00, 25.56s/it, v_num=r4bl, train_loss_step=2.110, train_loss_epoch=2.310]
Epoch 83: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=2.110, train_loss_epoch=2.110]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 83: 100% 1/1 [00:05<00:00, 5.59s/it, v_num=r4bl, train_loss_step=2.060, train_loss_epoch=2.110]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 83: 100% 1/1 [00:26<00:00, 26.76s/it, v_num=r4bl, train_loss_step=2.060, train_loss_epoch=2.110]
Epoch 84: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=2.060, train_loss_epoch=2.060]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 84: 100% 1/1 [00:04<00:00, 4.91s/it, v_num=r4bl, train_loss_step=2.050, train_loss_epoch=2.060]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 84: 100% 1/1 [00:24<00:00, 24.60s/it, v_num=r4bl, train_loss_step=2.050, train_loss_epoch=2.060]
Epoch 85: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=2.050, train_loss_epoch=2.050]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 85: 100% 1/1 [00:04<00:00, 4.59s/it, v_num=r4bl, train_loss_step=1.770, train_loss_epoch=2.050]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 85: 100% 1/1 [00:25<00:00, 25.92s/it, v_num=r4bl, train_loss_step=1.770, train_loss_epoch=2.050]
Epoch 86: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=1.770, train_loss_epoch=1.770]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 86: 100% 1/1 [00:06<00:00, 6.10s/it, v_num=r4bl, train_loss_step=2.080, train_loss_epoch=1.770]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 86: 100% 1/1 [00:27<00:00, 27.23s/it, v_num=r4bl, train_loss_step=2.080, train_loss_epoch=1.770]
Epoch 87: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=2.080, train_loss_epoch=2.080]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 87: 100% 1/1 [00:05<00:00, 5.32s/it, v_num=r4bl, train_loss_step=2.110, train_loss_epoch=2.080]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 87: 100% 1/1 [00:24<00:00, 24.72s/it, v_num=r4bl, train_loss_step=2.110, train_loss_epoch=2.080]
Epoch 88: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=2.110, train_loss_epoch=2.110]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 88: 100% 1/1 [00:04<00:00, 4.57s/it, v_num=r4bl, train_loss_step=2.150, train_loss_epoch=2.110]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 88: 100% 1/1 [00:25<00:00, 25.08s/it, v_num=r4bl, train_loss_step=2.150, train_loss_epoch=2.110]
Epoch 89: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=2.150, train_loss_epoch=2.150]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 89: 100% 1/1 [00:05<00:00, 5.30s/it, v_num=r4bl, train_loss_step=1.920, train_loss_epoch=2.150]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 89: 100% 1/1 [00:28<00:00, 28.29s/it, v_num=r4bl, train_loss_step=1.920, train_loss_epoch=2.150]
Epoch 90: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=1.920, train_loss_epoch=1.920]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 90: 100% 1/1 [00:05<00:00, 5.94s/it, v_num=r4bl, train_loss_step=2.350, train_loss_epoch=1.920]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 90: 100% 1/1 [00:25<00:00, 25.30s/it, v_num=r4bl, train_loss_step=2.350, train_loss_epoch=1.920]
Epoch 91: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=2.350, train_loss_epoch=2.350]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 91: 100% 1/1 [00:04<00:00, 4.58s/it, v_num=r4bl, train_loss_step=2.080, train_loss_epoch=2.350]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 91: 100% 1/1 [00:24<00:00, 24.76s/it, v_num=r4bl, train_loss_step=2.080, train_loss_epoch=2.350]
Epoch 92: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=2.080, train_loss_epoch=2.080]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 92: 100% 1/1 [00:05<00:00, 5.04s/it, v_num=r4bl, train_loss_step=1.990, train_loss_epoch=2.080]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 92: 100% 1/1 [00:27<00:00, 27.52s/it, v_num=r4bl, train_loss_step=1.990, train_loss_epoch=2.080]
Epoch 93: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=1.990, train_loss_epoch=1.990]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 93: 100% 1/1 [00:05<00:00, 5.94s/it, v_num=r4bl, train_loss_step=2.300, train_loss_epoch=1.990]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 93: 100% 1/1 [00:25<00:00, 25.64s/it, v_num=r4bl, train_loss_step=2.300, train_loss_epoch=1.990]
Epoch 94: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=2.300, train_loss_epoch=2.300]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 94: 100% 1/1 [00:04<00:00, 4.59s/it, v_num=r4bl, train_loss_step=1.850, train_loss_epoch=2.300]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 94: 100% 1/1 [00:24<00:00, 24.65s/it, v_num=r4bl, train_loss_step=1.850, train_loss_epoch=2.300]
Epoch 95: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=1.850, train_loss_epoch=1.850]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 95: 100% 1/1 [00:04<00:00, 4.98s/it, v_num=r4bl, train_loss_step=2.390, train_loss_epoch=1.850]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 95: 100% 1/1 [00:27<00:00, 27.50s/it, v_num=r4bl, train_loss_step=2.390, train_loss_epoch=1.850]
Epoch 96: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=2.390, train_loss_epoch=2.390]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 96: 100% 1/1 [00:05<00:00, 5.79s/it, v_num=r4bl, train_loss_step=1.980, train_loss_epoch=2.390]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 96: 100% 1/1 [00:25<00:00, 25.52s/it, v_num=r4bl, train_loss_step=1.980, train_loss_epoch=2.390]
Epoch 97: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=1.980, train_loss_epoch=1.980]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 97: 100% 1/1 [00:04<00:00, 4.83s/it, v_num=r4bl, train_loss_step=1.990, train_loss_epoch=1.980]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 97: 100% 1/1 [00:25<00:00, 25.29s/it, v_num=r4bl, train_loss_step=1.990, train_loss_epoch=1.980]
Epoch 98: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=1.990, train_loss_epoch=1.990]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 98: 100% 1/1 [00:04<00:00, 4.95s/it, v_num=r4bl, train_loss_step=1.920, train_loss_epoch=1.990]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 98: 100% 1/1 [00:28<00:00, 28.05s/it, v_num=r4bl, train_loss_step=1.920, train_loss_epoch=1.990]
Epoch 99: 0% 0/1 [00:00<?, ?it/s, v_num=r4bl, train_loss_step=1.920, train_loss_epoch=1.920]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 99: 100% 1/1 [00:05<00:00, 5.82s/it, v_num=r4bl, train_loss_step=1.800, train_loss_epoch=1.920]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 99: 100% 1/1 [00:25<00:00, 25.32s/it, v_num=r4bl, train_loss_step=1.800, train_loss_epoch=1.920]
Epoch 99: 100% 1/1 [00:25<00:00, 25.45s/it, v_num=r4bl, train_loss_step=1.800, train_loss_epoch=1.800]`Trainer.fit` stopped: `max_epochs=100` reached.
Epoch 99: 100% 1/1 [00:25<00:00, 25.50s/it, v_num=r4bl, train_loss_step=1.800, train_loss_epoch=1.800]
wandb: 🚀 View run graphcast-4x64-11_19_11-2215 at: https://wandb.ai/eugeneliuhaokun-university-of-lausanne/neural-lam/runs/dk8rr4bl
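The UserWarning repeated after every epoch in these logs only concerns DataLoader worker processes: the training script asks for 4 workers while this Colab runtime provides 2 CPUs, so data loading may be slower but training itself is unaffected. Below is a minimal sketch of the underlying PyTorch setting; the random stand-in dataset is an assumption for illustration, not the MEPS data. If your copy of train_model.py exposes an --n_workers argument, passing --n_workers 2 achieves the same thing from the command line.
# Hedged sketch: how the repeated DataLoader warning arises and how to avoid it in plain PyTorch.
import os
import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in dataset with random values; in this notebook it would be the MEPS training dataset.
dataset = TensorDataset(torch.randn(8, 4), torch.randn(8, 4))

# The warning fires because 4 workers are requested while this runtime suggests at most 2.
# Capping the worker count at the number of available CPUs keeps the loader quiet.
n_workers = min(4, os.cpu_count() or 1)
loader = DataLoader(dataset, batch_size=1, num_workers=n_workers)

for x, y in loader:
    pass  # iterate once just to show the loader still works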
# Train a deterministic graph-based forecasting model (graph_fm) that uses a hierarchical mesh graph and performs sequential message passing through the hierarchy during processing
!python /content/drive/MyDrive/neural-lam-prob_model_lam/train_model.py --model graph_fm --graph hierarchical
Global seed set to 42
/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Loaded graph with 71155 nodes (63784 grid, 7371 mesh)
Loaded hierarchical graph with structure:
level 0 - 6561 nodes, 51520 same-level edges
0<->1
- 6561 up edges, 6561 down edges
level 1 - 729 nodes, 5512 same-level edges
1<->2
- 729 up edges, 729 down edges
level 2 - 81 nodes, 544 same-level edges
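The printout above describes the hierarchical mesh: three levels of mesh nodes, same-level edges within each level, and up/down edges connecting adjacent levels. The sketch below shows one plausible in-memory layout for such a structure using per-level edge-index tensors; the dictionary keys and the random edges are illustrative assumptions, not the exact tensors loaded by neural-lam.
# Illustrative sketch (assumed layout, not the neural-lam loader).
import torch

def random_edges(n_edges: int, n_src: int, n_dst: int) -> torch.Tensor:
    # 2 x n_edges tensor of (source index, target index) pairs
    return torch.stack([torch.randint(n_src, (n_edges,)),
                        torch.randint(n_dst, (n_edges,))])

nodes_per_level = [6561, 729, 81]        # node counts from the printout above
same_level_edges = [51520, 5512, 544]    # same-level edge counts from the printout above

hierarchy = {
    # intra-level message-passing edges, one tensor per mesh level
    "m2m_edge_index": [random_edges(e, n, n)
                       for e, n in zip(same_level_edges, nodes_per_level)],
    # inter-level edges for levels 0<->1 and 1<->2 (one up and one down edge per lower-level node)
    "mesh_up_edge_index": [random_edges(6561, 6561, 729), random_edges(729, 729, 81)],
    "mesh_down_edge_index": [random_edges(6561, 729, 6561), random_edges(729, 81, 729)],
}

for lvl, edges in enumerate(hierarchy["m2m_edge_index"]):
    print(f"level {lvl} - {nodes_per_level[lvl]} nodes, {edges.shape[1]} same-level edges")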
wandb: Using wandb-core as the SDK backend. Please refer to https://wandb.me/wandb-core for more information.
wandb: Currently logged in as: eugeneliuhaokun (eugeneliuhaokun-university-of-lausanne). Use `wandb login --relogin` to force relogin
wandb: Tracking run with wandb version 0.18.7
wandb: Run data is saved locally in ./wandb/run-20241119_124040-ov9g4qd1
wandb: Run `wandb offline` to turn off syncing.
wandb: Syncing run graph_fm-4x64-11_19_12-0257
wandb: ⭐️ View project at https://wandb.ai/eugeneliuhaokun-university-of-lausanne/neural-lam
wandb: 🚀 View run at https://wandb.ai/eugeneliuhaokun-university-of-lausanne/neural-lam/runs/ov9g4qd1
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
[rank: 0] Global seed set to 42
Initializing distributed: GLOBAL_RANK: 0, MEMBER: 1/1
----------------------------------------------------------------------------------------------------
distributed_backend=nccl
All distributed processes registered. Starting with 1 processes
----------------------------------------------------------------------------------------------------
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
| Name | Type | Params
---------------------------------------------------------
0 | m2m_edge_index | BufferList | 0
1 | mesh_up_edge_index | BufferList | 0
2 | mesh_down_edge_index | BufferList | 0
3 | m2m_features | BufferList | 0
4 | mesh_up_features | BufferList | 0
5 | mesh_down_features | BufferList | 0
6 | mesh_static_features | BufferList | 0
7 | grid_embedder | Sequential | 7.8 K
8 | g2m_embedder | Sequential | 4.5 K
9 | m2g_embedder | Sequential | 4.5 K
10 | g2m_gnn | InteractionNet | 29.2 K
11 | encoding_grid_mlp | Sequential | 8.4 K
12 | m2g_gnn | InteractionNet | 29.2 K
13 | output_map | Sequential | 5.3 K
14 | mesh_embedders | ModuleList | 13.4 K
15 | mesh_same_embedders | ModuleList | 13.6 K
16 | mesh_up_embedders | ModuleList | 9.1 K
17 | mesh_down_embedders | ModuleList | 9.1 K
18 | mesh_init_gnns | ModuleList | 58.4 K
19 | mesh_read_gnns | ModuleList | 58.4 K
20 | mesh_down_gnns | ModuleList | 233 K
21 | mesh_down_same_gnns | ModuleList | 350 K
22 | mesh_up_gnns | ModuleList | 233 K
23 | mesh_up_same_gnns | ModuleList | 350 K
---------------------------------------------------------
1.4 M Trainable params
0 Non-trainable params
1.4 M Total params
5.673 Total estimated model params size (MB)
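The table above is generated by PyTorch Lightning from the submodules registered on the model. The same trainable/non-trainable totals can be reproduced for any torch.nn.Module with a short helper; the small MLP below is a generic stand-in of roughly embedder size, not one of the actual neural-lam components.
# Generic sketch: reproduce the parameter totals that Lightning prints for any torch.nn.Module.
import torch.nn as nn

def count_params(module: nn.Module) -> tuple[int, int]:
    # Mirrors the "Trainable params" / "Non-trainable params" rows of the summary above
    trainable = sum(p.numel() for p in module.parameters() if p.requires_grad)
    frozen = sum(p.numel() for p in module.parameters() if not p.requires_grad)
    return trainable, frozen

mlp = nn.Sequential(nn.Linear(17, 64), nn.SiLU(), nn.Linear(64, 64))
print(count_params(mlp))  # (5312, 0) -- the same order of magnitude as the embedders above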
Sanity Checking: 0it [00:00, ?it/s]
Epoch 0: 100% 1/1 [00:33<00:00, 33.91s/it, v_num=4qd1, train_loss_step=5.440]
Epoch 1: 100% 1/1 [00:32<00:00, 32.68s/it, v_num=4qd1, train_loss_step=3.640, train_loss_epoch=5.440]
Epoch 2: 100% 1/1 [00:34<00:00, 34.56s/it, v_num=4qd1, train_loss_step=3.980, train_loss_epoch=3.640]
Epoch 3: 100% 1/1 [00:35<00:00, 35.85s/it, v_num=4qd1, train_loss_step=5.370, train_loss_epoch=3.980]
Epoch 4: 100% 1/1 [00:36<00:00, 36.53s/it, v_num=4qd1, train_loss_step=5.380, train_loss_epoch=5.370]
Epoch 5: 100% 1/1 [00:34<00:00, 34.65s/it, v_num=4qd1, train_loss_step=4.000, train_loss_epoch=5.380]
Epoch 6: 100% 1/1 [00:32<00:00, 32.73s/it, v_num=4qd1, train_loss_step=4.740, train_loss_epoch=4.000]
Epoch 7: 100% 1/1 [00:33<00:00, 33.27s/it, v_num=4qd1, train_loss_step=4.250, train_loss_epoch=4.740]
Epoch 8: 100% 1/1 [00:34<00:00, 34.76s/it, v_num=4qd1, train_loss_step=3.350, train_loss_epoch=4.250]
Epoch 9: 100% 1/1 [00:35<00:00, 35.30s/it, v_num=4qd1, train_loss_step=4.360, train_loss_epoch=3.350]
Epoch 10: 100% 1/1 [00:33<00:00, 33.07s/it, v_num=4qd1, train_loss_step=3.180, train_loss_epoch=4.360]
Epoch 11: 100% 1/1 [00:32<00:00, 32.63s/it, v_num=4qd1, train_loss_step=4.730, train_loss_epoch=3.180]
Epoch 12: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=4.730, train_loss_epoch=4.730]
Epoch 12: 100% 1/1 [00:07<00:00, 7.07s/it, v_num=4qd1, train_loss_step=3.710, train_loss_epoch=4.730]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 12: 100% 1/1 [00:34<00:00, 34.68s/it, v_num=4qd1, train_loss_step=3.710, train_loss_epoch=4.730]
Epoch 13: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=3.710, train_loss_epoch=3.710]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 13: 100% 1/1 [00:05<00:00, 5.31s/it, v_num=4qd1, train_loss_step=4.530, train_loss_epoch=3.710]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 13: 100% 1/1 [00:35<00:00, 35.26s/it, v_num=4qd1, train_loss_step=4.530, train_loss_epoch=3.710]
Epoch 14: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=4.530, train_loss_epoch=4.530]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 14: 100% 1/1 [00:05<00:00, 5.36s/it, v_num=4qd1, train_loss_step=3.020, train_loss_epoch=4.530]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 14: 100% 1/1 [00:33<00:00, 33.30s/it, v_num=4qd1, train_loss_step=3.020, train_loss_epoch=4.530]
Epoch 15: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=3.020, train_loss_epoch=3.020]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 15: 100% 1/1 [00:07<00:00, 7.20s/it, v_num=4qd1, train_loss_step=2.720, train_loss_epoch=3.020]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 15: 100% 1/1 [00:33<00:00, 33.58s/it, v_num=4qd1, train_loss_step=2.720, train_loss_epoch=3.020]
Epoch 16: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=2.720, train_loss_epoch=2.720]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 16: 100% 1/1 [00:07<00:00, 7.39s/it, v_num=4qd1, train_loss_step=3.130, train_loss_epoch=2.720]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 16: 100% 1/1 [00:34<00:00, 34.11s/it, v_num=4qd1, train_loss_step=3.130, train_loss_epoch=2.720]
Epoch 17: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=3.130, train_loss_epoch=3.130]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 17: 100% 1/1 [00:05<00:00, 5.56s/it, v_num=4qd1, train_loss_step=2.750, train_loss_epoch=3.130]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 17: 100% 1/1 [00:34<00:00, 34.72s/it, v_num=4qd1, train_loss_step=2.750, train_loss_epoch=3.130]
Epoch 18: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=2.750, train_loss_epoch=2.750]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 18: 100% 1/1 [00:05<00:00, 5.22s/it, v_num=4qd1, train_loss_step=2.840, train_loss_epoch=2.750]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 18: 100% 1/1 [00:34<00:00, 34.06s/it, v_num=4qd1, train_loss_step=2.840, train_loss_epoch=2.750]
Epoch 19: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=2.840, train_loss_epoch=2.840]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 19: 100% 1/1 [00:06<00:00, 6.36s/it, v_num=4qd1, train_loss_step=2.930, train_loss_epoch=2.840]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 19: 100% 1/1 [00:33<00:00, 33.20s/it, v_num=4qd1, train_loss_step=2.930, train_loss_epoch=2.840]
Epoch 20: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=2.930, train_loss_epoch=2.930]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 20: 100% 1/1 [00:07<00:00, 7.23s/it, v_num=4qd1, train_loss_step=2.550, train_loss_epoch=2.930]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 20: 100% 1/1 [00:34<00:00, 34.16s/it, v_num=4qd1, train_loss_step=2.550, train_loss_epoch=2.930]
Epoch 21: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=2.550, train_loss_epoch=2.550]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 21: 100% 1/1 [00:05<00:00, 5.49s/it, v_num=4qd1, train_loss_step=3.290, train_loss_epoch=2.550]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 21: 100% 1/1 [00:34<00:00, 34.95s/it, v_num=4qd1, train_loss_step=3.290, train_loss_epoch=2.550]
Epoch 22: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=3.290, train_loss_epoch=3.290]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 22: 100% 1/1 [00:05<00:00, 5.23s/it, v_num=4qd1, train_loss_step=2.520, train_loss_epoch=3.290]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 22: 100% 1/1 [00:32<00:00, 32.69s/it, v_num=4qd1, train_loss_step=2.520, train_loss_epoch=3.290]
Epoch 23: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=2.520, train_loss_epoch=2.520]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 23: 100% 1/1 [00:07<00:00, 7.26s/it, v_num=4qd1, train_loss_step=3.210, train_loss_epoch=2.520]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 23: 100% 1/1 [00:33<00:00, 33.53s/it, v_num=4qd1, train_loss_step=3.210, train_loss_epoch=2.520]
Epoch 24: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=3.210, train_loss_epoch=3.210]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 24: 100% 1/1 [00:05<00:00, 5.25s/it, v_num=4qd1, train_loss_step=2.810, train_loss_epoch=3.210]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 24: 100% 1/1 [00:35<00:00, 35.42s/it, v_num=4qd1, train_loss_step=2.810, train_loss_epoch=3.210]
Epoch 25: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=2.810, train_loss_epoch=2.810]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 25: 100% 1/1 [00:07<00:00, 7.01s/it, v_num=4qd1, train_loss_step=3.780, train_loss_epoch=2.810]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 25: 100% 1/1 [00:36<00:00, 36.36s/it, v_num=4qd1, train_loss_step=3.780, train_loss_epoch=2.810]
Epoch 26: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=3.780, train_loss_epoch=3.780]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 26: 100% 1/1 [00:05<00:00, 5.34s/it, v_num=4qd1, train_loss_step=2.400, train_loss_epoch=3.780]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 26: 100% 1/1 [00:32<00:00, 32.13s/it, v_num=4qd1, train_loss_step=2.400, train_loss_epoch=3.780]
Epoch 27: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=2.400, train_loss_epoch=2.400]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 27: 100% 1/1 [00:07<00:00, 7.29s/it, v_num=4qd1, train_loss_step=3.030, train_loss_epoch=2.400]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 27: 100% 1/1 [00:34<00:00, 34.34s/it, v_num=4qd1, train_loss_step=3.030, train_loss_epoch=2.400]
Epoch 28: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=3.030, train_loss_epoch=3.030]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 28: 100% 1/1 [00:05<00:00, 5.89s/it, v_num=4qd1, train_loss_step=2.250, train_loss_epoch=3.030]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 28: 100% 1/1 [00:35<00:00, 35.14s/it, v_num=4qd1, train_loss_step=2.250, train_loss_epoch=3.030]
Epoch 29: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=2.250, train_loss_epoch=2.250]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 29: 100% 1/1 [00:05<00:00, 5.26s/it, v_num=4qd1, train_loss_step=3.020, train_loss_epoch=2.250]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 29: 100% 1/1 [00:33<00:00, 33.25s/it, v_num=4qd1, train_loss_step=3.020, train_loss_epoch=2.250]
Epoch 30: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=3.020, train_loss_epoch=3.020]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 30: 100% 1/1 [00:06<00:00, 6.35s/it, v_num=4qd1, train_loss_step=3.080, train_loss_epoch=3.020]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 30: 100% 1/1 [00:32<00:00, 32.85s/it, v_num=4qd1, train_loss_step=3.080, train_loss_epoch=3.020]
Epoch 31: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=3.080, train_loss_epoch=3.080]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 31: 100% 1/1 [00:07<00:00, 7.17s/it, v_num=4qd1, train_loss_step=2.900, train_loss_epoch=3.080]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 31: 100% 1/1 [00:34<00:00, 34.67s/it, v_num=4qd1, train_loss_step=2.900, train_loss_epoch=3.080]
Epoch 32: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=2.900, train_loss_epoch=2.900]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 32: 100% 1/1 [00:05<00:00, 5.33s/it, v_num=4qd1, train_loss_step=2.860, train_loss_epoch=2.900]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 32: 100% 1/1 [00:35<00:00, 35.38s/it, v_num=4qd1, train_loss_step=2.860, train_loss_epoch=2.900]
Epoch 33: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=2.860, train_loss_epoch=2.860]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 33: 100% 1/1 [00:05<00:00, 5.20s/it, v_num=4qd1, train_loss_step=2.430, train_loss_epoch=2.860]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 33: 100% 1/1 [00:32<00:00, 32.27s/it, v_num=4qd1, train_loss_step=2.430, train_loss_epoch=2.860]
Epoch 34: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=2.430, train_loss_epoch=2.430]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 34: 100% 1/1 [00:07<00:00, 7.33s/it, v_num=4qd1, train_loss_step=2.900, train_loss_epoch=2.430]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 34: 100% 1/1 [00:33<00:00, 33.65s/it, v_num=4qd1, train_loss_step=2.900, train_loss_epoch=2.430]
Epoch 35: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=2.900, train_loss_epoch=2.900]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 35: 100% 1/1 [00:05<00:00, 5.62s/it, v_num=4qd1, train_loss_step=2.810, train_loss_epoch=2.900]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 35: 100% 1/1 [00:34<00:00, 34.74s/it, v_num=4qd1, train_loss_step=2.810, train_loss_epoch=2.900]
Epoch 36: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=2.810, train_loss_epoch=2.810]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 36: 100% 1/1 [00:05<00:00, 5.34s/it, v_num=4qd1, train_loss_step=2.390, train_loss_epoch=2.810]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 36: 100% 1/1 [00:33<00:00, 33.39s/it, v_num=4qd1, train_loss_step=2.390, train_loss_epoch=2.810]
Epoch 37: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=2.390, train_loss_epoch=2.390]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 37: 100% 1/1 [00:06<00:00, 6.52s/it, v_num=4qd1, train_loss_step=2.430, train_loss_epoch=2.390]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 37: 100% 1/1 [00:32<00:00, 32.93s/it, v_num=4qd1, train_loss_step=2.430, train_loss_epoch=2.390]
Epoch 38: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=2.430, train_loss_epoch=2.430]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 38: 100% 1/1 [00:07<00:00, 7.11s/it, v_num=4qd1, train_loss_step=2.380, train_loss_epoch=2.430]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 38: 100% 1/1 [00:34<00:00, 34.51s/it, v_num=4qd1, train_loss_step=2.380, train_loss_epoch=2.430]
Epoch 39: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=2.380, train_loss_epoch=2.380]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 39: 100% 1/1 [00:05<00:00, 5.30s/it, v_num=4qd1, train_loss_step=2.750, train_loss_epoch=2.380]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 39: 100% 1/1 [00:34<00:00, 34.70s/it, v_num=4qd1, train_loss_step=2.750, train_loss_epoch=2.380]
Epoch 40: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=2.750, train_loss_epoch=2.750]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 40: 100% 1/1 [00:05<00:00, 5.16s/it, v_num=4qd1, train_loss_step=2.440, train_loss_epoch=2.750]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 40: 100% 1/1 [00:32<00:00, 32.38s/it, v_num=4qd1, train_loss_step=2.440, train_loss_epoch=2.750]
Epoch 41: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=2.440, train_loss_epoch=2.440]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 41: 100% 1/1 [00:07<00:00, 7.12s/it, v_num=4qd1, train_loss_step=2.600, train_loss_epoch=2.440]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 41: 100% 1/1 [00:33<00:00, 33.69s/it, v_num=4qd1, train_loss_step=2.600, train_loss_epoch=2.440]
Epoch 42: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=2.600, train_loss_epoch=2.600]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 42: 100% 1/1 [00:05<00:00, 5.29s/it, v_num=4qd1, train_loss_step=2.480, train_loss_epoch=2.600]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 42: 100% 1/1 [00:34<00:00, 34.76s/it, v_num=4qd1, train_loss_step=2.480, train_loss_epoch=2.600]
Epoch 43: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=2.480, train_loss_epoch=2.480]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 43: 100% 1/1 [00:05<00:00, 5.22s/it, v_num=4qd1, train_loss_step=2.240, train_loss_epoch=2.480]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 43: 100% 1/1 [00:33<00:00, 33.52s/it, v_num=4qd1, train_loss_step=2.240, train_loss_epoch=2.480]
Epoch 44: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=2.240, train_loss_epoch=2.240]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 44: 100% 1/1 [00:08<00:00, 8.64s/it, v_num=4qd1, train_loss_step=2.900, train_loss_epoch=2.240]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 44: 100% 1/1 [00:40<00:00, 40.28s/it, v_num=4qd1, train_loss_step=2.900, train_loss_epoch=2.240]
Epoch 45: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=2.900, train_loss_epoch=2.900]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 45: 100% 1/1 [00:07<00:00, 7.74s/it, v_num=4qd1, train_loss_step=3.000, train_loss_epoch=2.900]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 45: 100% 1/1 [00:38<00:00, 38.13s/it, v_num=4qd1, train_loss_step=3.000, train_loss_epoch=2.900]
Epoch 46: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=3.000, train_loss_epoch=3.000]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 46: 100% 1/1 [00:08<00:00, 8.86s/it, v_num=4qd1, train_loss_step=2.420, train_loss_epoch=3.000]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 46: 100% 1/1 [00:35<00:00, 35.80s/it, v_num=4qd1, train_loss_step=2.420, train_loss_epoch=3.000]
Epoch 47: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=2.420, train_loss_epoch=2.420]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 47: 100% 1/1 [00:07<00:00, 7.69s/it, v_num=4qd1, train_loss_step=2.400, train_loss_epoch=2.420]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 47: 100% 1/1 [00:35<00:00, 35.11s/it, v_num=4qd1, train_loss_step=2.400, train_loss_epoch=2.420]
Epoch 48: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=2.400, train_loss_epoch=2.400]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 48: 100% 1/1 [00:05<00:00, 5.24s/it, v_num=4qd1, train_loss_step=2.460, train_loss_epoch=2.400]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 48: 100% 1/1 [00:34<00:00, 34.97s/it, v_num=4qd1, train_loss_step=2.460, train_loss_epoch=2.400]
Epoch 49: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=2.460, train_loss_epoch=2.460]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 49: 100% 1/1 [00:05<00:00, 5.29s/it, v_num=4qd1, train_loss_step=2.870, train_loss_epoch=2.460]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 49: 100% 1/1 [00:33<00:00, 33.33s/it, v_num=4qd1, train_loss_step=2.870, train_loss_epoch=2.460]
Epoch 50: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=2.870, train_loss_epoch=2.870]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 50: 100% 1/1 [00:07<00:00, 7.33s/it, v_num=4qd1, train_loss_step=2.250, train_loss_epoch=2.870]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 50: 100% 1/1 [00:37<00:00, 37.64s/it, v_num=4qd1, train_loss_step=2.250, train_loss_epoch=2.870]
Epoch 51: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=2.250, train_loss_epoch=2.250]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 51: 100% 1/1 [00:06<00:00, 6.42s/it, v_num=4qd1, train_loss_step=2.050, train_loss_epoch=2.250]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 51: 100% 1/1 [00:33<00:00, 33.14s/it, v_num=4qd1, train_loss_step=2.050, train_loss_epoch=2.250]
Epoch 52: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=2.050, train_loss_epoch=2.050]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 52: 100% 1/1 [00:07<00:00, 7.11s/it, v_num=4qd1, train_loss_step=2.570, train_loss_epoch=2.050]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 52: 100% 1/1 [00:34<00:00, 34.31s/it, v_num=4qd1, train_loss_step=2.570, train_loss_epoch=2.050]
Epoch 53: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=2.570, train_loss_epoch=2.570]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 53: 100% 1/1 [00:05<00:00, 5.23s/it, v_num=4qd1, train_loss_step=2.070, train_loss_epoch=2.570]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 53: 100% 1/1 [00:35<00:00, 35.08s/it, v_num=4qd1, train_loss_step=2.070, train_loss_epoch=2.570]
Epoch 54: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=2.070, train_loss_epoch=2.070]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 54: 100% 1/1 [00:05<00:00, 5.25s/it, v_num=4qd1, train_loss_step=2.210, train_loss_epoch=2.070]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 54: 100% 1/1 [00:32<00:00, 32.83s/it, v_num=4qd1, train_loss_step=2.210, train_loss_epoch=2.070]
Epoch 55: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=2.210, train_loss_epoch=2.210]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 55: 100% 1/1 [00:07<00:00, 7.43s/it, v_num=4qd1, train_loss_step=2.470, train_loss_epoch=2.210]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 55: 100% 1/1 [00:33<00:00, 33.66s/it, v_num=4qd1, train_loss_step=2.470, train_loss_epoch=2.210]
Epoch 56: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=2.470, train_loss_epoch=2.470]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 56: 100% 1/1 [00:06<00:00, 6.36s/it, v_num=4qd1, train_loss_step=2.180, train_loss_epoch=2.470]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 56: 100% 1/1 [00:34<00:00, 34.62s/it, v_num=4qd1, train_loss_step=2.180, train_loss_epoch=2.470]
Epoch 57: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=2.180, train_loss_epoch=2.180]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 57: 100% 1/1 [00:06<00:00, 6.14s/it, v_num=4qd1, train_loss_step=2.320, train_loss_epoch=2.180]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 57: 100% 1/1 [00:36<00:00, 36.28s/it, v_num=4qd1, train_loss_step=2.320, train_loss_epoch=2.180]
Epoch 58: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=2.320, train_loss_epoch=2.320]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 58: 100% 1/1 [00:06<00:00, 6.25s/it, v_num=4qd1, train_loss_step=2.640, train_loss_epoch=2.320]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 58: 100% 1/1 [00:38<00:00, 38.68s/it, v_num=4qd1, train_loss_step=2.640, train_loss_epoch=2.320]
Epoch 59: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=2.640, train_loss_epoch=2.640]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 59: 100% 1/1 [00:05<00:00, 5.23s/it, v_num=4qd1, train_loss_step=2.630, train_loss_epoch=2.640]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 59: 100% 1/1 [00:32<00:00, 32.27s/it, v_num=4qd1, train_loss_step=2.630, train_loss_epoch=2.640]
Epoch 60: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=2.630, train_loss_epoch=2.630]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 60: 100% 1/1 [00:07<00:00, 7.39s/it, v_num=4qd1, train_loss_step=2.710, train_loss_epoch=2.630]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 60: 100% 1/1 [00:34<00:00, 34.15s/it, v_num=4qd1, train_loss_step=2.710, train_loss_epoch=2.630]
Epoch 61: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=2.710, train_loss_epoch=2.710]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 61: 100% 1/1 [00:05<00:00, 5.78s/it, v_num=4qd1, train_loss_step=2.320, train_loss_epoch=2.710]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 61: 100% 1/1 [00:34<00:00, 34.65s/it, v_num=4qd1, train_loss_step=2.320, train_loss_epoch=2.710]
Epoch 62: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=2.320, train_loss_epoch=2.320]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 62: 100% 1/1 [00:05<00:00, 5.28s/it, v_num=4qd1, train_loss_step=2.080, train_loss_epoch=2.320]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 62: 100% 1/1 [00:33<00:00, 33.14s/it, v_num=4qd1, train_loss_step=2.080, train_loss_epoch=2.320]
Epoch 63: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=2.080, train_loss_epoch=2.080]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 63: 100% 1/1 [00:06<00:00, 6.07s/it, v_num=4qd1, train_loss_step=2.620, train_loss_epoch=2.080]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 63: 100% 1/1 [00:32<00:00, 32.84s/it, v_num=4qd1, train_loss_step=2.620, train_loss_epoch=2.080]
Epoch 64: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=2.620, train_loss_epoch=2.620]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 64: 100% 1/1 [00:07<00:00, 7.13s/it, v_num=4qd1, train_loss_step=1.970, train_loss_epoch=2.620]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 64: 100% 1/1 [00:33<00:00, 33.88s/it, v_num=4qd1, train_loss_step=1.970, train_loss_epoch=2.620]
Epoch 65: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=1.970, train_loss_epoch=1.970]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 65: 100% 1/1 [00:05<00:00, 5.30s/it, v_num=4qd1, train_loss_step=2.650, train_loss_epoch=1.970]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 65: 100% 1/1 [00:34<00:00, 34.43s/it, v_num=4qd1, train_loss_step=2.650, train_loss_epoch=1.970]
Epoch 66: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=2.650, train_loss_epoch=2.650]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 66: 100% 1/1 [00:05<00:00, 5.22s/it, v_num=4qd1, train_loss_step=2.230, train_loss_epoch=2.650]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 66: 100% 1/1 [00:33<00:00, 33.32s/it, v_num=4qd1, train_loss_step=2.230, train_loss_epoch=2.650]
Epoch 67: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=2.230, train_loss_epoch=2.230]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 67: 100% 1/1 [00:07<00:00, 7.84s/it, v_num=4qd1, train_loss_step=2.200, train_loss_epoch=2.230]
Validation: 0it [00:00, ?it/s]
Validation: 0% 0/1 [00:00<?, ?it/s]
Validation DataLoader 0: 0% 0/1 [00:00<?, ?it/s]
Epoch 67: 100% 1/1 [00:35<00:00, 35.52s/it, v_num=4qd1, train_loss_step=2.200, train_loss_epoch=2.230]
Epoch 68: 0% 0/1 [00:00<?, ?it/s, v_num=4qd1, train_loss_step=2.200, train_loss_epoch=2.200]/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Epoch 68: 100% 1/1 [00:05<00:00, 5.95s/it, v_num=4qd1, train_loss_step=1.970, train_loss_epoch=2.200]
Validation: 0it [00:00, ?it/s]
Epoch 68: 100% 1/1 [00:36<00:00, 36.72s/it, v_num=4qd1, train_loss_step=1.970, train_loss_epoch=2.200]
...
Epoch 99: 100% 1/1 [00:32<00:00, 32.83s/it, v_num=4qd1, train_loss_step=1.780, train_loss_epoch=1.780]
`Trainer.fit` stopped: `max_epochs=100` reached.
wandb: 🚀 View run graph_fm-4x64-11_19_12-0257 at: https://wandb.ai/eugeneliuhaokun-university-of-lausanne/neural-lam/runs/ov9g4qd1
8.7.7. 8.3.7 Question 04: Evaluate models#
After training the models, we should evaluate the GNN models as usual.
Check the performance of the GraphCast model and the Graph-FM model on both the validation and the test data. Then report the RMSE metrics of your results, overall and per forecast step.
The visualization of the test loss (lead times up to 57 h) is shown below:
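The cells that follow run each model/split combination one at a time. As a convenience, here is a minimal sketch that loops over all four evaluations; it assumes the same script path and flags used in the cells below and simply re-issues those commands from Python.

import subprocess

# Hypothetical convenience loop: re-runs the same evaluation commands
# that the individual cells below execute one by one.
SCRIPT = "/content/drive/MyDrive/neural-lam-prob_model_lam/train_model.py"

runs = [
    # (model-specific flags, evaluation split)
    (["--model", "graphcast"], "val"),
    (["--model", "graphcast"], "test"),
    (["--model", "graph_fm", "--graph", "hierarchical"], "val"),
    (["--model", "graph_fm", "--graph", "hierarchical"], "test"),
]

for model_flags, split in runs:
    cmd = ["python", SCRIPT, *model_flags, "--eval", split]
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)  # each call prints its own metric table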
!python /content/drive/MyDrive/neural-lam-prob_model_lam/train_model.py --model graphcast --eval val
Global seed set to 42
/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Loaded graph with 70345 nodes (63784 grid, 6561 mesh)
Edges in subgraphs: m2m=57616, g2m=100656, m2g=255136
wandb: Using wandb-core as the SDK backend. Please refer to https://wandb.me/wandb-core for more information.
wandb: Currently logged in as: eugeneliuhaokun (eugeneliuhaokun-university-of-lausanne). Use `wandb login --relogin` to force relogin
wandb: Tracking run with wandb version 0.18.7
wandb: Run data is saved locally in ./wandb/run-20241119_134023-l311u512
wandb: Run `wandb offline` to turn off syncing.
wandb: Syncing run eval-val-graphcast-4x64-11_19_13-9033
wandb: ⭐️ View project at https://wandb.ai/eugeneliuhaokun-university-of-lausanne/neural-lam
wandb: 🚀 View run at https://wandb.ai/eugeneliuhaokun-university-of-lausanne/neural-lam/runs/l311u512
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
Running evaluation on val
[rank: 0] Global seed set to 42
Initializing distributed: GLOBAL_RANK: 0, MEMBER: 1/1
----------------------------------------------------------------------------------------------------
distributed_backend=nccl
All distributed processes registered. Starting with 1 processes
----------------------------------------------------------------------------------------------------
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Testing DataLoader 0: 0% 0/1 [00:00<?, ?it/s]/usr/local/lib/python3.10/dist-packages/cartopy/io/__init__.py:241: DownloadWarning: Downloading: https://naturalearth.s3.amazonaws.com/50m_physical/ne_50m_coastline.zip
warnings.warn(f'Downloading: {url}', DownloadWarning)
Testing DataLoader 0: 100% 1/1 [05:33<00:00, 333.72s/it]
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Test metric ┃ DataLoader 0 ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ test_loss_unroll1 │ 9.712401390075684 │
│ test_loss_unroll10 │ 85.80818939208984 │
│ test_loss_unroll15 │ 110.86847686767578 │
│ test_loss_unroll19 │ 180.26170349121094 │
│ test_loss_unroll2 │ 27.829153060913086 │
│ test_loss_unroll3 │ 40.156944274902344 │
│ test_loss_unroll5 │ 33.03308868408203 │
│ test_mean_loss │ 80.15142822265625 │
└───────────────────────────┴───────────────────────────┘
wandb: 🚀 View run eval-val-graphcast-4x64-11_19_13-9033 at: https://wandb.ai/eugeneliuhaokun-university-of-lausanne/neural-lam/runs/l311u512
!python /content/drive/MyDrive/neural-lam-prob_model_lam/train_model.py --model graphcast --eval test
Global seed set to 42
/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Loaded graph with 70345 nodes (63784 grid, 6561 mesh)
Edges in subgraphs: m2m=57616, g2m=100656, m2g=255136
wandb: Using wandb-core as the SDK backend. Please refer to https://wandb.me/wandb-core for more information.
wandb: Currently logged in as: eugeneliuhaokun (eugeneliuhaokun-university-of-lausanne). Use `wandb login --relogin` to force relogin
wandb: Tracking run with wandb version 0.18.7
wandb: Run data is saved locally in ./wandb/run-20241119_134639-xje7r1jg
wandb: Run `wandb offline` to turn off syncing.
wandb: Syncing run eval-test-graphcast-4x64-11_19_13-5323
wandb: ⭐️ View project at https://wandb.ai/eugeneliuhaokun-university-of-lausanne/neural-lam
wandb: 🚀 View run at https://wandb.ai/eugeneliuhaokun-university-of-lausanne/neural-lam/runs/xje7r1jg
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Running evaluation on test
[rank: 0] Global seed set to 42
Initializing distributed: GLOBAL_RANK: 0, MEMBER: 1/1
----------------------------------------------------------------------------------------------------
distributed_backend=nccl
All distributed processes registered. Starting with 1 processes
----------------------------------------------------------------------------------------------------
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Testing DataLoader 0: 100% 1/1 [05:38<00:00, 338.74s/it]
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Test metric ┃ DataLoader 0 ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ test_loss_unroll1 │ 6.736212730407715 │
│ test_loss_unroll10 │ 71.78734588623047 │
│ test_loss_unroll15 │ 112.95072937011719 │
│ test_loss_unroll19 │ 186.19430541992188 │
│ test_loss_unroll2 │ 19.337446212768555 │
│ test_loss_unroll3 │ 27.81077766418457 │
│ test_loss_unroll5 │ 22.077903747558594 │
│ test_mean_loss │ 74.6421890258789 │
└───────────────────────────┴───────────────────────────┘
wandb: 🚀 View run eval-test-graphcast-4x64-11_19_13-5323 at: https://wandb.ai/eugeneliuhaokun-university-of-lausanne/neural-lam/runs/xje7r1jg
What RMSE do you get for your GraphCast model, and how does it behave across the forecast steps?
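One way to start answering this is to tabulate the per-step losses logged above against lead time. The sketch below hard-codes the test_loss_unroll* values from the two GraphCast tables and assumes a 3 h rollout step (so unroll step 19 corresponds to a 57 h lead time); whether these logged losses correspond directly to RMSE depends on the loss configured in train_model.py, so treat the square-root remark at the end as an approximation only.

import pandas as pd

# Per-unroll-step losses copied from the two GraphCast tables above.
steps = [1, 2, 3, 5, 10, 15, 19]
df = pd.DataFrame(
    {
        "lead_time_h": [3 * s for s in steps],  # assumes a 3 h rollout step
        "val_loss": [9.71, 27.83, 40.16, 33.03, 85.81, 110.87, 180.26],
        "test_loss": [6.74, 19.34, 27.81, 22.08, 71.79, 112.95, 186.19],
    },
    index=pd.Index(steps, name="unroll_step"),
)
print(df)
# If the logged loss is a (weighted) squared error, df[["val_loss", "test_loss"]] ** 0.5
# gives RMSE-like numbers -- check the loss definition in train_model.py first.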
!python /content/drive/MyDrive/neural-lam-prob_model_lam/train_model.py --model graph_fm --graph hierarchical --eval val
Global seed set to 42
/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Loaded graph with 71155 nodes (63784 grid, 7371 mesh)
Loaded hierarchical graph with structure:
level 0 - 6561 nodes, 51520 same-level edges
0<->1
- 6561 up edges, 6561 down edges
level 1 - 729 nodes, 5512 same-level edges
1<->2
- 729 up edges, 729 down edges
level 2 - 81 nodes, 544 same-level edges
wandb: Using wandb-core as the SDK backend. Please refer to https://wandb.me/wandb-core for more information.
wandb: Currently logged in as: eugeneliuhaokun (eugeneliuhaokun-university-of-lausanne). Use `wandb login --relogin` to force relogin
wandb: Tracking run with wandb version 0.18.7
wandb: Run data is saved locally in ./wandb/run-20241119_135316-fqtkowrs
wandb: Run `wandb offline` to turn off syncing.
wandb: Syncing run eval-val-graph_fm-4x64-11_19_13-2810
wandb: ⭐️ View project at https://wandb.ai/eugeneliuhaokun-university-of-lausanne/neural-lam
wandb: 🚀 View run at https://wandb.ai/eugeneliuhaokun-university-of-lausanne/neural-lam/runs/fqtkowrs
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
Running evaluation on val
[rank: 0] Global seed set to 42
Initializing distributed: GLOBAL_RANK: 0, MEMBER: 1/1
----------------------------------------------------------------------------------------------------
distributed_backend=nccl
All distributed processes registered. Starting with 1 processes
----------------------------------------------------------------------------------------------------
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Testing DataLoader 0: 100% 1/1 [05:41<00:00, 341.33s/it]
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Test metric ┃ DataLoader 0 ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ test_loss_unroll1 │ 9.912364959716797 │
│ test_loss_unroll10 │ 75.36561584472656 │
│ test_loss_unroll15 │ 81.70404815673828 │
│ test_loss_unroll19 │ 144.54150390625 │
│ test_loss_unroll2 │ 28.094867706298828 │
│ test_loss_unroll3 │ 39.867698669433594 │
│ test_loss_unroll5 │ 30.563583374023438 │
│ test_mean_loss │ 66.08300018310547 │
└───────────────────────────┴───────────────────────────┘
wandb: 🚀 View run eval-val-graph_fm-4x64-11_19_13-2810 at: https://wandb.ai/eugeneliuhaokun-university-of-lausanne/neural-lam/runs/fqtkowrs
!python /content/drive/MyDrive/neural-lam-prob_model_lam/train_model.py --model graph_fm --graph hierarchical --eval test
Global seed set to 42
/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Loaded graph with 71155 nodes (63784 grid, 7371 mesh)
Loaded hierarchical graph with structure:
level 0 - 6561 nodes, 51520 same-level edges
0<->1
- 6561 up edges, 6561 down edges
level 1 - 729 nodes, 5512 same-level edges
1<->2
- 729 up edges, 729 down edges
level 2 - 81 nodes, 544 same-level edges
wandb: Using wandb-core as the SDK backend. Please refer to https://wandb.me/wandb-core for more information.
wandb: Currently logged in as: eugeneliuhaokun (eugeneliuhaokun-university-of-lausanne). Use `wandb login --relogin` to force relogin
wandb: Tracking run with wandb version 0.18.7
wandb: Run data is saved locally in ./wandb/run-20241119_135933-1iy3pnd1
wandb: Run `wandb offline` to turn off syncing.
wandb: Syncing run eval-test-graph_fm-4x64-11_19_13-3579
wandb: ⭐️ View project at https://wandb.ai/eugeneliuhaokun-university-of-lausanne/neural-lam
wandb: 🚀 View run at https://wandb.ai/eugeneliuhaokun-university-of-lausanne/neural-lam/runs/1iy3pnd1
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Running evaluation on test
[rank: 0] Global seed set to 42
Initializing distributed: GLOBAL_RANK: 0, MEMBER: 1/1
----------------------------------------------------------------------------------------------------
distributed_backend=nccl
All distributed processes registered. Starting with 1 processes
----------------------------------------------------------------------------------------------------
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
/usr/local/lib/python3.10/dist-packages/torch/utils/data/dataloader.py:560: UserWarning: This DataLoader will create 4 worker processes in total. Our suggested max number of worker in current system is 2, which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Testing DataLoader 0: 100% 1/1 [05:47<00:00, 347.74s/it]
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┃ Test metric ┃ DataLoader 0 ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━┩
│ test_loss_unroll1 │ 6.805522918701172 │
│ test_loss_unroll10 │ 60.32221984863281 │
│ test_loss_unroll15 │ 82.2379379272461 │
│ test_loss_unroll19 │ 140.77517700195312 │
│ test_loss_unroll2 │ 19.519119262695312 │
│ test_loss_unroll3 │ 27.52130699157715 │
│ test_loss_unroll5 │ 19.04291534423828 │
│ test_mean_loss │ 57.93271255493164 │
└───────────────────────────┴───────────────────────────┘
wandb: 🚀 View run eval-test-graph_fm-4x64-11_19_13-3579 at: https://wandb.ai/eugeneliuhaokun-university-of-lausanne/neural-lam/runs/1iy3pnd1
Did you get a similar RMSE for your Graph-FM model?
Congratulations! You have completed this exercise on GNNs. You now know how to load existing model structures and features to train GNNs that predict the weather. Based on the plots on the wandb platform, can you explore the behaviour of both models further and explain the differences between them?
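As a starting point for that comparison, here is a minimal sketch that uses only the numbers from the two test tables above; it assumes a 3 h rollout step, and the lower curve at long lead times hints at where the hierarchical Graph-FM model may have an advantage.

import matplotlib.pyplot as plt

# Per-step test losses copied from the --eval test tables above.
steps = [1, 2, 3, 5, 10, 15, 19]
graphcast = [6.74, 19.34, 27.81, 22.08, 71.79, 112.95, 186.19]
graph_fm = [6.81, 19.52, 27.52, 19.04, 60.32, 82.24, 140.78]

lead_time_h = [3 * s for s in steps]  # assumes 3 h per rollout step

plt.plot(lead_time_h, graphcast, marker="o", label="GraphCast (test_mean_loss 74.6)")
plt.plot(lead_time_h, graph_fm, marker="s", label="Graph-FM (test_mean_loss 57.9)")
plt.xlabel("Lead time (h)")
plt.ylabel("Logged test loss")
plt.title("GraphCast vs. Graph-FM on the test split")
plt.legend()
plt.show()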