
My GSoC workflow


I have talked about what my project is about, and some weeks have passed since the start of the coding period; however, I realize I have yet to explain what GSoC is and what my daily routine as a participant looks like. This post has two main sections: a short introduction to GSoC (all the information about the program is available on its official site), and a larger second section on my daily work routine.

GSoC

Google Summer of Code is a global program organized by Google Open Source. Its goal is both to support students, by giving them the opportunity to work as programmers during their summer, and to empower and grow the open source software community. It started in 2005 with 200 students and has grown every year since.

GSoC has two selection phases: one for free and open source organizations and another for students. Once the organizations have been selected, it is up to the organizations alone to choose which students are accepted. Google's role is to assign the number of student slots per organization, not to decide who fills them. I have not been able to find much information about the slot assignment process, but it appears to depend on the number of proposals received and the total number of students each organization is able to mentor. Afterwards, during the summer, the students code for their open source projects.

Eventually, there is a final evaluation, where students must also write a summary of their work with links to the code they produced.

My workflow

My work for DeepForest during GSoC covers a wide range of tasks that goes even beyond everything detailed in the proposal. Hence, organization and planning are vital to achieving my goals. Moreover, not only do I have to plan all the different tasks and how to divide the work, but also when to do them.

For task management, GitHub itself already has plenty of features to help me stay organized. When I start coding a feature, I open a pull request from my fork to the DeepForest main repository and include a description of the PR goals and some examples in the first PR message. This way, all DeepForest contributors know what the PR is about and can advise and review early in the coding process. In addition, the continuous integration tests run on most commits, making sure I am not breaking any of the other functionality in DeepForest. In some cases, I also use checklists (which can be embedded directly in PR messages) to keep track of all the features and subtasks still pending in the PR. Finally, if a related subtask goes beyond the scope of the PR, I create an issue to keep track of it.
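To make the continuous integration part concrete, here is a minimal sketch of what one of these automated checks might look like as a pytest test. The sample image path and the exact output columns are assumptions on my side, not something taken from the actual DeepForest test suite.

```python
# A minimal sketch of the kind of check CI might run on each commit.
# The image path below is a hypothetical placeholder, and the expected
# column names reflect my understanding of the prediction output rather
# than the real DeepForest test suite.
from deepforest import main


def test_predict_image_returns_boxes():
    model = main.deepforest()
    model.use_release()  # download the prebuilt release model
    boxes = model.predict_image(path="tests/data/sample_tree_image.png")
    # Each predicted tree crown should come with box coordinates and a score
    assert {"xmin", "ymin", "xmax", "ymax", "score"}.issubset(boxes.columns)
    assert len(boxes) > 0
```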

Afterwards, when the PR nears its end point, and before considering it ready for merging, I also test the code on some realish examples, not only through the automated continuous integration tests. The proposal already includes a whole task for testing on real examples, which is why these examples are quite simple and why I label them as realish rather than real. In a few weeks I plan to get to the real-case testing; by doing so I aim to show that all the algorithms and code work in real cases.
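As an illustration, the snippet below is a minimal sketch of the kind of realish check I have in mind, using DeepForest's prediction interface. The raster path and the patch settings are placeholder assumptions, not data or parameters from my proposal.

```python
# A minimal sketch of a "realish" check run before marking a PR as ready;
# the raster path and the patch_size / patch_overlap values below are
# hypothetical placeholders.
from deepforest import main

model = main.deepforest()
model.use_release()  # prebuilt release model

# Predict tree crowns over a small sample raster and inspect the output
boxes = model.predict_tile(
    raster_path="sample_data/small_plot.tif",
    patch_size=400,
    patch_overlap=0.05,
)
print(boxes.head())
print(f"{len(boxes)} tree crowns detected")
```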


 
