Mac setup for PyTorch, fastai, transformers with GPU support in pure pip (no Conda)
How to set up PyTorch on Apple Silicon Macs with minimum pain and no need for Conda (2024)
By Przemek, last update
Jeremy Howard suggests using Conda for managing the local installation of PyTorch. The advantage of this approach is that Conda creates a fully hermetic environment, and the resulting installation comes with working GPU support.
Personally, I prefer to use pip to manage packages. The good news is that as of 2024, a pip-based setup can also be hermetic and have working GPU support! Here’s how to set it up on Apple Silicon Macs 💫.
First things first, we need a Python installation to run all these magical
artificial intelligence tools.
Don’t use the system installation of Python. It’s often old, and it’s used for
operating system needs. We don’t want to mess with it, and we definitely don’t
want to sudo-install any packages into it.
Tip: As recommended, don’t get the literally latest Python version, as it sometimes has compatibility
issues with libraries not yet ready to support it. Instead, let’s get the almost-latest
one. As of October 2023 the latest was 3.12, so I got 3.11.
After the installation, put something like:
in your .bashrc / .zshrc, and we’re good for this step.
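The snippet in question is a PATH entry. For the standard python.org installer on macOS it would look something like this (the exact framework path is an assumption based on the default installer layout):

```shell
# Put the python.org 3.11 install first on PATH
# (path assumes the standard python.org macOS framework installer)
export PATH="/Library/Frameworks/Python.framework/Versions/3.11/bin:$PATH"
```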
Now let’s create a virtual Python environment.
We want to use a separate hermetic Python environment for each project, so that
they don’t interfere with each other.
The only package that we will install globally, in the Python environment we
just set up, is the package needed to manage virtual environments:
$ python3.11 -m pip install --user virtualenv
Then we can go to the project directory and create the new virtual environment, using the virtualenv package we just installed:
$ python3.11 -m virtualenv .env
From now on, make sure that any time we run any Python code, command, or tool
related to the project, we do it in a terminal where we first activated our
environment. We need to do this once for every terminal session we open.
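Activation looks like this (the `.env` path matches the directory created by the command above):

```shell
# Activate the project's virtual environment
# (repeat in every new terminal session, from the project directory)
source .env/bin/activate
```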
On the PyTorch website, I went with the following configuration, which seems to be pre-selected by default:
Make sure to run this and other pip commands with the virtual environment activated as noted above.
$ pip3 install torch torchvision torchaudio
That’s it: no special dance is needed to get Apple Silicon GPU support
via Metal. If you’d like to double-check, run the following:
import torch

# Check that MPS is available
if not torch.backends.mps.is_available():
    if not torch.backends.mps.is_built():
        print("MPS not available because the current PyTorch install was not "
              "built with MPS enabled.")
    else:
        print("MPS not available because the current MacOS version is not 12.3+ "
              "and/or you do not have an MPS-enabled device on this machine.")
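Beyond the availability check, a quick way to confirm the GPU actually does work is to run a small operation on the `mps` device. This sketch falls back to CPU when MPS is unavailable (e.g. on non-Mac machines), so it runs anywhere:

```python
import torch

# Quick sanity check: run a small matrix multiply on the MPS device,
# falling back to CPU when MPS is not available.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
x = torch.rand(64, 64, device=device)
y = x @ x
print(y.device, y.shape)
```

On an Apple Silicon Mac with a working install, the printed device should be `mps:0`.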
Let’s see how local execution on an M1 MacBook with MPS GPU acceleration compares with running the same code on an Nvidia P100 GPU on Kaggle.
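A minimal timing sketch of this kind of comparison might look as follows; the matmul workload here is a stand-in, not the actual training code, and the numbers will vary by machine:

```python
import time
import torch

# Time a batch of matrix multiplies on whichever device is available.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
a = torch.rand(1024, 1024, device=device)
b = torch.rand(1024, 1024, device=device)

start = time.perf_counter()
for _ in range(10):
    c = a @ b
if device.type == "mps":
    torch.mps.synchronize()  # wait for queued GPU work before stopping the clock
elapsed = time.perf_counter() - start
print(f"{device.type}: {elapsed:.3f}s for 10 matmuls")
```

Note the `synchronize()` call: MPS ops are queued asynchronously, so without it the timer would stop before the GPU has finished.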
RuntimeError: MPS backend out of memory (MPS allocated:
2.42 GB, other allocations: 15.38 GB, max allowed: 18.13 GB).
Tried to allocate 375.29 MB on private pool.
Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper
limit for memory allocations (may cause system failure).
I tried to be smarter and set PYTORCH_MPS_HIGH_WATERMARK_RATIO to a higher value that nevertheless keeps memory capped, for example 0.5. However, this results in another error during trainer initialization:
RuntimeError: invalid low watermark ratio 1.4
It’s not obvious why setting PYTORCH_MPS_HIGH_WATERMARK_RATIO to 0.5 causes this other internal variable to end up at 1.4; 1.4 appears to be the allocator’s default low watermark ratio, which must stay below the high watermark. In the end, the only way I found to get all of this to work was to follow the instruction from the error message and take the risk of unbounded RAM use.
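Concretely, the workaround from the error message looks like this (the training script name is a placeholder for whatever you actually run):

```shell
# Disable the MPS allocator's upper memory limit, as the error message suggests.
# Caution: this allows unbounded memory growth and may destabilize the system.
PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 python train.py
```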