Photon is a developer-friendly framework for Deep Learning in Julia. Under the hood it leverages Knet and provides a convenient API on top of it. The main goal is to enable fast prototyping and reproducible results.
You can install Photon like any other Julia package. From the Julia REPL, type ] to enter the Pkg REPL mode and run:
pkg> add Photon
You can also install it from the master branch:
pkg> add https://github.com/neurallayer/Photon.jl
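Once installed, the package can be loaded like any other Julia package:

using Photon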
More details can be found on the documentation site.
Defining a model is straightforward and should look familiar if you have used Keras or MXNet in the past.
A fully connected network with two layers:
model = Sequential(
    Dense(256, relu),
    Dense(10)
)
A convolutional network with max pooling (note that Photon takes care of the flattening, so you can connect a Dense layer directly to a convolutional layer):
model = Sequential(
    Conv2D(16, 3, relu),
    Conv2D(16, 3, relu),
    MaxPool2D(),
    Dense(256, relu),
    Dense(10)
)
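To illustrate the automatic flattening, here is a hypothetical forward pass. The WHCN input layout (width x height x channels x batch) and calling the model directly on an array are assumptions based on common Knet conventions, not confirmed Photon API:

# Hypothetical input: a batch of 8 grayscale 28x28 images (assumed WHCN layout)
x = randn(Float32, 28, 28, 1, 8)

# The convolutional output is flattened automatically before the first Dense layer
y = model(x)   # expected to be a 10x8 array, one column per sample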
Or a recurrent LSTM network:
model = Sequential(
    LSTM(256, 3),
    Dense(64, relu),
    Dense(10)
)
Training the model can also be done through an easy API:
workout = Workout(model, some_loss_function)
train!(workout, data, epochs=10)
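Putting it together, here is a minimal end-to-end sketch. The loss name mse and the data format (an iterable of (input, target) batch tuples) are assumptions for illustration, not confirmed Photon API:

using Photon

# A small regression model
model = Sequential(
    Dense(64, relu),
    Dense(1)
)

# Toy data: 100 batches of 16 samples with 10 features each
# (the (x, y) tuple format is an assumption)
data = [(randn(Float32, 10, 16), randn(Float32, 1, 16)) for _ in 1:100]

# `mse` is assumed to be an available loss function
workout = Workout(model, mse)
train!(workout, data, epochs=10)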
The combination of Deep Learning and Julia is a very performant one, especially when running models on a GPU. Performance is close to the best the Python ecosystem has to offer.
The main goal is to provide a user-friendly API for Machine Learning that enables developing both prototypes and production-ready solutions while remaining fast.
Some of the key features:
- Good support for the various ways you can retrieve and transform your data.
- Develop models with a minimal amount of code.
- Get all the required insights and visualizations into the performance of the model.
There remain many things to do:
- Extend unit tests to cover more code (> 90%)
- Implement more models including trained weights (ResNet, ...)
- Write tutorials and improve the code documentation
- Finalise the APIs so we can release 1.0
- Implement more complex building blocks like transformers
- Add more loss functions
And by the way, we are always open to accepting contributions ;)
Photon is provided under the MIT open source license.
We used several other open source frameworks for code and inspiration:
- Knet (pronounced "kay-net") is the Koç University deep learning framework implemented in Julia by Deniz Yuret and collaborators. It is the back-end for Photon, partly due to its excellent performance on GPUs.
- Flux, which we use for inspiration. It has to be one of the most beautiful code bases out there.
- Keras and MXNet for their well-thought-out APIs. We also copied some of their excellent documentation for layers and losses.
- And of course Julia, which enables us to write fast deep learning applications.