Salt Lake City – hybrid onsite and remote 6 month contract
Data Science Developer
Detailed Job Description – highlight 3-5 Must Haves
The role we need to fill is a Data Science Developer, also known in the industry as a Machine Learning Engineer. The person will focus on our data pipelining work for machine learning models. I anticipate we will need them for 6 months.
I will wait for feedback on the new rate analysis for this role in the Salt Lake area before commenting on the cap, if that is OK. I do think we can arrive at a competitive rate to fill this quickly. I agree with the approach of trying SLC first and expanding the territory if we can't find enough candidates.
Requires a bachelor's degree in computer science, computer engineering, computer information systems, or a related field, plus 6+ years' experience with ETL, Ruby, Python, and Java; big data distributed systems; SQL and NoSQL data stores; and orchestration tools and processes, or other directly related experience. Prior experience in the financial sector preferred. A combination of education and experience may meet qualifications.
Extensive knowledge of various programming languages, including R, Ruby, Java, and Python.
Solid understanding of relational databases and SQL, NoSQL data stores, and ETL development.
Extensive knowledge of software engineering or DevOps.
Advanced knowledge of data modeling concepts and integrated application development methodology using tools like Apache Spark, Hadoop, Hive, and Apache Kafka.
Extensive knowledge of fault-tolerant software design and highly scalable, high-performance software development.
Must have excellent analysis, judgment, project management, and collaboration skills. Ability to communicate both verbally and in writing with technical and non-technical staff. Ability to work in a team environment with excellent interpersonal skills. Ability to adapt to changing technology and priorities. Must be able to work independently, with the ability to prioritize and manage projects effectively. Must be able to interpret, validate, and map business requirements to an appropriate solution.
Our data platform provides real-time streaming, batch processing and pipeline orchestration, data lake management, and data cataloging.