Job Type: Full Time
Nearmap is unique: an Australian global technology company with incredible people.
At Nearmap, we have tens of petabytes of high-quality aerial imagery (covering half a million square kilometres a year at 5-7cm resolution, with regularly captured imagery back to 2009). We produce automated 3D models of entire cities, and have recently launched our new product, Nearmap AI, which turns our visual content into semantic information to power decisions in a wide range of organisations. As a publicly listed company focussed on growth, we have both the resources to allow you to succeed in your role, and the agility (thanks to cloud-based infrastructure) to rapidly take advantage of the latest developments in the field. Nearmap is continually evolving, and you’ll need to thrive in a fast-paced environment that changes rapidly.
We are recruiting for a Machine Learning Ops Engineer to join the Output Data and Applications team within the AI Systems group. You’ll be designing, building, and operating software systems that transform petabytes of imagery into insight. Our technology stack is based on the Python scientific libraries and traverses deep learning technology such as TensorFlow and PyTorch, cloud-native technology such as Kubernetes, Kubeflow and Kafka, and GIS tools such as the Shapely and GeoPandas libraries. We are committed to software best practices including infrastructure as code, GitOps, CI/CD, and as much automation as makes sense.
You will work with an incredibly passionate and talented team of Data Scientists, Machine Learning Engineers and Data Engineers, and your work will interface with systems that run on Kubernetes clusters with thousands of GPUs, train large deep learning models with novel architectures, and produce sophisticated data products for a broad range of customers. The AI Systems group mixes data science, ML engineering and software engineering, and comprises three teams divided along responsibility lines, rather than skill sets: Data Engine, Model R&D, and Output Data and Applications.
The Output Data and Applications team picks up the trained deep learning models and executes them as part of a complex DAG of post-processing operations and models to create the data products that power Nearmap AI. We operate at very large scale, running our algorithms to generate content that spans tens of thousands of square kilometres per day. The post-processing chain is built on Kubernetes and Kubeflow, and includes a range of machine learning models to perform tasks such as building footprint vectorisation and storey estimation from Nearmap’s 3D mesh, with automated quality control and human-in-the-loop review. We design, build and maintain our own infrastructure, algorithms and applications, and produce product-focussed data that meets customer needs in insurance, local government, roofing and other industries.
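To give a flavour of what a "DAG of post-processing operations" means, here is a minimal sketch using Python's standard-library `graphlib`. The step names are illustrative only, not Nearmap's actual pipeline, and a real system would execute each step on a cluster rather than in-process.

```python
from graphlib import TopologicalSorter

# Hypothetical post-processing DAG: each key lists the steps it depends on.
# Names are made up for illustration.
dag = {
    "run_model": set(),
    "vectorise_footprints": {"run_model"},
    "estimate_storeys": {"run_model"},
    "quality_control": {"vectorise_footprints", "estimate_storeys"},
    "publish": {"quality_control"},
}

# static_order() yields a valid execution order respecting all dependencies.
order = list(TopologicalSorter(dag).static_order())
print(order)
```

In a production orchestrator such as Kubeflow, each node would be a containerised pipeline step, but the scheduling constraint is the same topological ordering shown here.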
A typical day will look like this:
- Participate in the design and scoping of greenfield projects.
- Work within a team to deliver end-to-end technical solutions: typically starting with spike sessions, moving through architectural design and test creation, iterating on the solution, and ultimately deploying to production.
- Commit to software best practices and a strong culture of peer review.
- Supervise associates when required.
To achieve our goals, we have 1000-node Kubernetes clusters on AWS with Kubeflow orchestration, on top of which we run middleware comprising Golang/gin and Python/FastAPI microservices. The successful candidate will work with our team and the Nearmap DevOps team to own the deployment of Kubeflow and middleware into the clusters, as well as taking an active part in building up best practices for MLOps processes, including things like dynamic configuration of our ML applications, discoverability of tasks that have already run, and closing the loop between production data and labelled data generation. The MLOps language of choice is Python, and this is the core language of the team; however, the ideal candidate knows or is willing to learn Golang with the support of an existing senior engineer, and infrastructure languages such as Terraform with the support of DevOps.
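As a flavour of the "dynamic configuration" work mentioned above, here is a minimal, hypothetical sketch (not Nearmap's actual code) of loading and validating an ML application's configuration with standard-library dataclasses, so that typos in config keys fail fast rather than silently:

```python
import json
from dataclasses import dataclass, fields

@dataclass(frozen=True)
class InferenceConfig:
    """Hypothetical config for an ML inference service."""
    model_name: str
    batch_size: int = 8
    gpu: bool = True

def load_config(raw: str) -> InferenceConfig:
    """Parse JSON config and reject unknown keys."""
    data = json.loads(raw)
    known = {f.name for f in fields(InferenceConfig)}
    unknown = set(data) - known
    if unknown:
        raise ValueError(f"unknown config keys: {sorted(unknown)}")
    return InferenceConfig(**data)

cfg = load_config('{"model_name": "roof-segmenter", "batch_size": 16}')
print(cfg)
```

In practice the raw JSON might come from a config service, an environment variable, or a mounted Kubernetes ConfigMap; the validation pattern is the same either way.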
We're after exceptional candidates who have real-world experience but are still eager to learn.
- Programming/Tech Environments: Ability to code in a Linux environment, using Git for source control. Commitment to software engineering principles, a keen eye for clean code, and a passion for robustness and correctness.
- Software Engineering: Working on shared codebases to produce production quality code deployed as Docker containers.
- Systems thinking: Consider end-to-end solutions that comprise many parts, each of which may be a stand-alone machine learning application.
- MLOps: Champion best practice for versioning, deployment, testing and automation for models and code.
- Domain Knowledge (Machine Learning): Working on machine learning problems applied at scale.
- Infrastructure management: experience with infrastructure-as-code tools like Terraform or CloudFormation.
- Scale: Working with large computational DAGs, where data sets don’t fit into memory, and require multiple nodes to compute efficiently.
- Cloud Computing: Working on AWS or GCP using distributed virtual machines, Kubernetes, etc. is mandatory for some roles, beneficial for others.
- Scientific Approach: Follow the scientific method of formulating hypotheses, and applying statistical tests to validate them.
- Pragmatism: While extensive knowledge of theory and best practices is highly valued, pragmatism wins over elaborate theory when it comes to shipping products that work.
- Collaboration: Data science is a team sport; communicate well, share knowledge, and be open to taking on ideas from anyone in the team.
- Attention to detail: Showing attention to detail when it counts is important... to be considered for this role, [click this link](https://www.dropbox.com/s/u4janrzqgfxn61l/test.tar.gz?dl=0) and apply some basic data science skills (an astute software engineer should be able to solve it quickly with a little googling!).
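The "Scientific Approach" point above can be sketched concretely with a standard-library permutation test. The numbers below are made up for illustration; the point is the workflow of stating a null hypothesis and estimating a p-value before accepting that a change helped.

```python
import random
import statistics

random.seed(0)

# Made-up example: did a model change improve per-tile accuracy?
baseline = [0.81, 0.79, 0.83, 0.80, 0.82, 0.78, 0.81, 0.80]
candidate = [0.84, 0.86, 0.83, 0.85, 0.87, 0.84, 0.85, 0.86]

observed = statistics.mean(candidate) - statistics.mean(baseline)

# Null hypothesis: both samples come from the same distribution.
# Estimate the p-value by repeatedly shuffling the pooled values.
pooled = baseline + candidate
n = len(baseline)
trials = 10_000
count = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[n:]) - statistics.mean(pooled[:n])
    if diff >= observed:
        count += 1
p_value = count / trials
print(f"observed diff={observed:.3f}, p={p_value:.4f}")
```

A small p-value here means a shuffled split rarely produces a difference as large as the observed one, so the improvement is unlikely to be noise.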
Some of our benefits
Nearmap takes a holistic approach to our employees’ emotional, physical and financial wellness. Some of our current benefits include competitive pay, access to the Nearmap employee share scheme, short and long-term financial incentives, flexible working options, paid volunteer days, gym and phone rebates, and lots of development opportunities including hack-a-thons and pitch-fests.
If you can see yourself working at Nearmap and feel you have the right level of experience, we invite you to get in touch.
Nearmap does not accept unsolicited resumes from recruitment agencies and search firms. Please do not email or send unsolicited resumes to any Nearmap employee, location or address. Nearmap is not responsible for any fees related to unsolicited resumes.