Spark Scala Developer Resume

* Participate in the design and implementation of a large, architecturally significant application.
* Communicate effectively with both business and technical audiences.
* Build partnerships across application, business, and infrastructure teams.
* Worked on Talend Open Studio and Talend Integration Suite.
* Environment: Java, JSP, HTML, CSS, RAD, JDBC, JavaScript, JBoss, Struts, Servlets, WebSphere, Windows XP, Eclipse, Apache Tomcat, EJB, XML, SOA.
* Developed Spark scripts using Scala shell commands as per the requirements.
* Created files and tuned SQL queries in Hive using HUE.
* Worked with HiveQL on big data logs to perform trend analysis of user behavior on various online modules.
* Involved in creating Hive tables and Pig tables, loading data, and writing Hive queries and Pig scripts.
* Design, develop, and modify data workflow software systems using scientific analysis and mathematical models to predict and measure outcomes and handle the consequences of design…
* Amazon Web Services (AWS), including Amazon S3, Amazon Elastic MapReduce (EMR), Amazon Athena, and big data ecosystem tools including Hadoop, Hive, HDFS, Spark (PySpark…
* Having 8 years of experience in the IT industry implementing, developing, and maintaining various…
* Strong knowledge of Hadoop architecture and daemons such as…
* Implemented a JMS topic to receive input in the form of XML and parsed it through a common XSD.
* Modified JavaScript for handling access privileges.
* Extensively wrote core Java and multithreading code in the application.
* Involved in performance tuning wherever there was latency or delay in code execution.
* Worked with different file formats (ORCFile, TextFile) and different compression codecs (GZIP, Snappy, LZO).
Power phrases for your Spark skills on a resume:
* Experience in manipulating/analyzing large datasets and finding patterns and insights within structured and …
* Involved in converting MapReduce programs into Spark transformations using Spark RDDs in Scala.
* Overall 9+ years of professional IT experience, with 5 years in analysis, architectural design, prototyping, development, integration, and testing of applications using Java/J2EE technologies, and 3 years of experience as a Hadoop developer.
* Developed Hive queries and UDFs to analyze and transform the data in HDFS.
Spark Developer Resume Example 2
CAREER OBJECTIVE: Seasoned IT professional with 18+ years of total industry experience, including over 10 years in the business intelligence field.
* Implemented Spark using Scala, utilizing DataFrames and the Spark SQL API for faster processing of data.
* Loaded data into Spark RDDs and performed in-memory computation to generate the output response.
* Able to work on own initiative; highly proactive, self-motivated, and resourceful, with a strong commitment towards work.
* Developed Spark/MapReduce jobs to parse JSON or XML data.
* Implemented counters on HBase data to count total records in different tables.
* Wrote JDBC statements, prepared statements, and callable statements in Java, JSPs, and servlets.
* Experience in data processing: collecting, aggregating, and moving data from various sources using Apache Flume and Kafka.
* Involved in the analysis, design, and development phases of the software development lifecycle.
* Wrote stored procedures for reports that use multiple data sources.
* Used Scala libraries to process XML data stored in HDFS; the processed data was stored back in HDFS.
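Several bullets above mention converting MapReduce programs into Spark RDD transformations. As a minimal sketch (made-up data, plain Python rather than a live Spark session), the classic word count shows the same map/shuffle/reduce shape those bullets describe:

```python
# A MapReduce-style word count sketched with plain Python before porting to
# Spark, where the same chain becomes flatMap / map / reduceByKey on an RDD.
from collections import Counter

lines = ["spark scala spark", "hive scala"]

words = [w for line in lines for w in line.split()]   # map phase: emit one word per record
counts = dict(Counter(words))                         # shuffle + reduce: count per word

print(sorted(counts.items()))  # [('hive', 1), ('scala', 2), ('spark', 2)]
```

In PySpark the equivalent pipeline would be `rdd.flatMap(str.split).map(lambda w: (w, 1)).reduceByKey(add)`; the local version above is just the single-machine analogue.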
* Implemented Sqoop for large dataset transfers between Hadoop and RDBMS.
* Knowledge of containers and basic hands-on work experience with them.
* Analyzed the SQL scripts and designed the solution to implement them using PySpark.
* Environment: Java, J2EE, Servlets, JSP, Struts, Spring, Hibernate, JDBC, JNDI, JMS, JavaScript, XSLT, DTD, SAX, DOM, XML, UML, TOAD, Jasper Reports, Oracle, Eclipse RCP, IBM ClearCase, JIRA, WebSphere, Unix/Windows.
The recruiter has to be able to contact you ASAP if they would like to offer you the job. This is why you need to provide your first and last name, email, and telephone number.
* Migrated code from Hive to Apache Spark and Scala using Spark SQL and RDDs.
* Experience in working with Flume to load log data from multiple sources directly into HDFS.
* Worked with join patterns and implemented map-side joins and reduce-side joins using MapReduce.
* Responsible for enhancements to mutual fund products written in Java, Servlets, XML, and XSLT.
* Wrote Hive queries to extract the processed data.
* Responsible for coding and deploying according to the client's requirements.
* Experience in importing and exporting data into HDFS; experienced in handling data from different datasets, joining them, and preprocessing using…
* Spark, Scala or Python or Java, Impala, Hive, or other related technologies; hands-on programming experience in Spark, including but not limited to …
* Good understanding of Cassandra architecture, replication strategy, gossip, snitch, etc.
* Implemented a data pipeline by chaining multiple mappers using ChainMapper.
* Strong understanding of the Hadoop ecosystem, including HDFS, MapReduce, HBase, ZooKeeper, Pig, Hadoop Streaming, Sqoop, Oozie, and Hive.
* Responsibilities: Analyze and define the researcher's strategy, and determine the system architecture and requirements needed to achieve goals.
ETL Developer Resume
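One bullet above mentions map-side and reduce-side joins. A map-side (broadcast) join can be sketched locally with made-up data: the small table is held in memory (Hadoop ships it via the distributed cache, Spark via a broadcast variable) while the large side streams through, so no shuffle is needed:

```python
# Map-side (broadcast) join sketch on local collections with made-up data.
categories = {1: "electronics", 2: "grocery"}        # small (broadcast) side
products = [(1, "tv"), (2, "rice"), (1, "radio")]    # large (streamed) side

# Each large-side record is joined in place against the in-memory small table.
joined = [(name, categories[cat_id])
          for cat_id, name in products
          if cat_id in categories]

print(joined)  # [('tv', 'electronics'), ('rice', 'grocery'), ('radio', 'electronics')]
```

A reduce-side join, by contrast, would tag both inputs with the join key and combine them after the shuffle, which is why it handles two large inputs but costs more I/O.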
* Extensive involvement in analyzing the requirements and detailed system study.
Any developer in the big data world should be smart enough to learn a programming language that has some complexity.
* Experience in designing user interfaces using HTML, CSS, JavaScript, and JSP.
* Hands-on knowledge of core Java concepts such as exceptions, collections, data structures, multithreading, serialization, and deserialization.
* Responsible for developing scalable distributed data solutions using Hadoop.
* Involved in a story-driven agile development methodology and actively participated in daily scrum meetings.
Spark's great power and flexibility require a developer who does not only know the Spark API well: they must also know about the pitfalls of distributed storage, how to structure a data processing pipeline that has to handle the 5 V's of big data (volume, velocity, variety, veracity, and value), and how to turn that into maintainable code.
This Java Developer Resume article will help you craft an impressive resume when you are applying for a Java developer role.
* Experience in transferring data from RDBMS to HDFS and Hive tables using Sqoop.
* Involved in performing analytics and visualization on log data to estimate the error rate and study the probability of future errors using regression models.
* SCJP 1.4 Sun Certified Programmer.
* Responsible for implementing advanced procedures like…
* Experience in extracting appropriate features from datasets in order to handle…
* Collected data using Spark Streaming from…
* Created a database maintenance planner for SQL Server performance, covering database integrity checks, updating database statistics, and re-indexing.
We've collected 25 free real-time Hadoop, big data, and Spark resumes from candidates who have applied for various positions at indiatrainings.
* Involved in the design of the application database schema; wrote complex SQL queries, stored procedures, functions, and triggers in PL/SQL.
* Involved in converting Hive/SQL queries into Spark transformations using Spark RDDs, Scala, and Python.
* Responsible for bringing in data under HBase using the HBase shell as well as the HBase client API.
* Experienced in running Hadoop streaming jobs to process terabytes of data.
* Extensively used core Java features such as multithreading, exceptions, and collections.
* Worked on HBase to perform real-time analytics and experienced in CQL to extract data from Cassandra tables.
* Developed common application-level client-side validation using JavaScript.
* Developed and deployed the application on IBM WebSphere Application Server.
* Involved in developing a linear regression model for predicting a continuous measurement.
* Imported data from AWS S3 into Spark RDDs and performed transformations and actions on the RDDs.
* Used Spark SQL to load JSON data, create a SchemaRDD, and load it into Hive tables; handled structured data using Spark SQL.
* Developed Spark programs using Scala APIs to compare the performance of Spark with Hive and SQL.
* Used Rational Application Developer (RAD) for developing the application.
* Defined the Accumulo tables and loaded data into them for near-real-time data reports.
* Having experience in developing a data pipeline using…
* Environment: Hadoop, HDFS, Spark, MapReduce, Hive, Sqoop, Kafka, HBase, Oozie, Flume, Scala, AWS, Python, Java, JSON, SQL scripting, Linux shell scripting, Avro, Parquet, Hortonworks.
Wells Fargo - Charlotte, NC
* Used the WebHDFS REST API to make HTTP GET, PUT, POST, and DELETE requests from the web server to perform analytics on the data lake.
The statement "Scala is hard to master" is definitely true to some extent, but the learning curve of Scala for Spark is well worth the time and money.
* Involved in performance tuning of Spark applications, fixing the right batch interval time and tuning memory.
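One bullet above mentions a linear regression model for predicting a continuous measurement. For a single predictor the closed-form least-squares fit is easy to show on made-up data; a distributed library (e.g. Spark MLlib) estimates the same coefficients at scale:

```python
# Closed-form least-squares fit for one predictor on made-up data.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]          # generated from y = 2x + 1

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n  # means of x and y
# slope = covariance(x, y) / variance(x); intercept follows from the means.
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

print(slope, intercept)  # 2.0 1.0
```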
Spark/Scala Developer
* Created HBase tables to store variable data formats coming from different legacy systems.
* Implemented Apache Pig scripts to load data from and store data into Hive.
* Converted the existing reports to SSRS without any change in the output of the reports.
* Loaded and transformed large sets of structured and semi-structured data using Hive.
* Adequate knowledge of and working experience with agile and waterfall methodologies.
Spark Developer, Apr 2016 to Current
* Collected and aggregated large amounts of web log data from different sources such as web servers, mobile, and network devices using Apache…
* SCJP 1.4 Sun Certified Programmer.
* Implemented different J2EE design patterns such as Session Facade, Observer, Observable, Singleton, and Business Delegate to accommodate feature enhancements and change requests.
* Used Impala for querying HDFS data to achieve better performance.
* Imported required tables from RDBMS to HDFS using Sqoop, and used Storm and Kafka to get real-time streaming of data into HBase.
* Implemented Spark using Scala and Spark SQL for faster testing and processing of data.
Big Data Developer Resume Samples: examples of curated bullet points for your resume to help you get an interview.
* Created ETL mappings with Talend Integration Suite to pull data from the source, apply transformations, and load data into the target database.
* Created new database objects such as tables, procedures, functions, triggers, and views using T-SQL.
* Involved in the evaluation and analysis of the Hadoop cluster and different big data analytics tools, including Pig, the HBase database, and Sqoop.
* Environment: Hadoop, Cloudera Manager, Linux (RedHat, CentOS, Ubuntu), MapReduce, HBase, Sqoop, Pig, HDFS, Flume, Python.
* Used TOAD to check and verify all the database turnaround times, and also tested the connections for response times and query round-trip behavior.
* Explored Spark 1.4.x, improving the performance and optimization of the existing algorithms in Hadoop 2.5.2 using SparkContext, Spark SQL, and DataFrames.
* Hands-on experience with sequence files, RC files, combiners, counters, dynamic partitions, and bucketing for best practices and performance improvement.
Objective: Over 8+ years of experience in information technology with a strong background in analyzing, designing, developing, testing, and implementing data warehouse solutions in various domains such as banking, insurance, health care, telecom, and wireless.
* Used SQL queries to perform back-end testing on the database.
* In-depth understanding of Hadoop architecture and its various components, such as HDFS, ApplicationMaster, NodeManager, ResourceManager, NameNode, DataNode, and MapReduce concepts.
* Developed an HBase data model on top of HDFS data to perform real-time analytics using the Java API.
* Implemented Flume to import streaming log data and aggregate it to HDFS.
To make your resume shine, write in active language and use direct action verbs.
* Worked on root-cause analyses for all the issues that occur in batch and provided permanent fixes for them.
It is the first and most crucial step towards your goal.
* Expertise in using Spark SQL with various data sources such as JSON, Parquet, and Hive.
* Good knowledge of AWS infrastructure services: Amazon Simple Storage Service (Amazon S3), EMR, and Amazon Elastic Compute Cloud (Amazon EC2).
* Used JIRA as a bug-reporting tool for updating the bug report.
Below is a sample resume screenshot.
* Worked on Spring Integration for communicating with business components, and on Spring with Hibernate integration for ORM mappings.
* Imported data from different sources such as AWS S3 and LFS into Spark RDDs.
* Developed Spark programs using Scala APIs to compare the performance of Spark with Hive and SQL.
* Implemented secondary sorting to sort reducer output globally in MapReduce.
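The secondary-sorting bullet above can be sketched locally with made-up data: sort on a composite (primary, secondary) key so that values arrive in order within each group, which is what a MapReduce secondary sort achieves with a custom partitioner and grouping comparator:

```python
# Secondary sort sketch: composite-key ordering on a local collection.
clicks = [("bob", 3, "/cart"), ("ann", 2, "/home"),
          ("bob", 1, "/home"), ("ann", 5, "/pay")]

# Primary key = user, secondary key = timestamp: within each user,
# records are now time-ordered, as they would arrive at a reducer.
ordered = sorted(clicks, key=lambda c: (c[0], c[1]))

print(ordered)
```

In a real job, the partitioner routes on the primary key only, so all of a user's records still land on one reducer while the framework sorts them by the full composite key.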
* Developed Pig Latin scripts to extract the data from the web server output files and load it into HDFS.
* Wrote MapReduce code that takes log files as input, parses the logs, and structures them in tabular format to facilitate effective querying of the log data.
* Overall 8-10 years of IT experience with 4-6 years of Spark/Scala programming experience.
* Experience in the usage of Hadoop distributions like Cloudera 5.3 (CDH5, CDH3), the Hortonworks distribution, and Amazon AWS.
* Expertise with the tools in the Hadoop ecosystem, including Pig, Hive, HDFS, MapReduce, Sqoop, Storm, …
* Worked on implementing and optimizing Hadoop/MapReduce algorithms for big data analytics.
* Used Kafka for log aggregation: gathering physical log files off servers and placing them in a central location such as HDFS for processing.
I took only the Cloud Block Storage source to simplify and speed up the process.
Resume is your first impression in front of an interviewer.
* Wrote JUnit test cases for system testing; used Log4j for logging.
Also, try to include precise numbers for the results you achieved in your previous roles.
* Involved in HBase setup and storing data into HBase, which will be used for analysis.
Hadoop Developer Resume
* In-depth understanding of Spark architecture, including Spark Core, Spark SQL, DataFrames, Spark Streaming, and Spark MLlib.
* Debugged and identified issues reported by QA with the Hadoop jobs by configuring them to run against the local file system.
Spark/Hadoop Developer resume in Piscataway Township, NJ, 08854 - October 2016
* Used WebSphere Application Server for deploying the application.
* Experienced in performing real-time analytics on HDFS using…
* Worked on session tracking in JSP and Servlets.
The contact information section is important in your Scala developer resume.
* Used Spark for interactive queries, processing of streaming data, and integration with popular NoSQL databases for huge volumes of data.
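The log-parsing bullet above (raw log files structured into tabular form for querying) boils down to a simple per-line transform. A minimal sketch, with a hypothetical log layout rather than any real web server's format:

```python
# Parsing raw log lines into structured (date, method, path, status) rows;
# the log layout here is made up for illustration.
raw_logs = [
    "2016-10-01 GET /home 200",
    "2016-10-01 GET /search 404",
    "2016-10-02 POST /login 200",
]

records = []
for line in raw_logs:
    date, method, path, status = line.split()
    records.append((date, method, path, int(status)))   # one tabular row per line

# Once tabular, queries become trivial, e.g. counting failed requests:
error_count = sum(1 for r in records if r[3] >= 400)
print(error_count)  # 1
```

In a MapReduce or Spark job the same transform runs inside the mapper, with each parsed tuple emitted as a record for downstream querying.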
* Implemented the workflows using the Apache Oozie framework to automate tasks.
PROFESSIONAL SUMMARY
* Actively participated in the software development lifecycle (scope, design, implement, deploy, test), including design and code reviews.
This document describes a sample process of implementing part of the existing Dim_Instance ETL.
* Big Data Ecosystems: Hadoop, MapReduce, HDFS, ZooKeeper, Hive, Pig, Sqoop, Oozie, Flume, YARN, Spark, NiFi
* Scripting Languages: JSP, Servlets, JavaScript, XML, HTML, Python
* Application Servers: Apache Tomcat, WebSphere, WebLogic, JBoss
* Environment: HDP 2.3.4, Hadoop, Hive, HDFS, HPC, WebHDFS, WebHCat, Spark, Spark SQL, Kafka, AWS, Java, Scala, web servers, Maven and SBT builds, Rally (CA Technologies)
* Environment: Apache Spark, Hadoop, HDFS, Hive, Kafka, Sqoop, Scala, Talend, Cassandra, Oozie, Cloudera, Impala, Linux
* Hands-on experience using JBoss for EJB and JTA, and for caching and clustering purposes.
* Developed the user interface using JSP, HTML, CSS, and JavaScript to simplify the complexities of the application.
* Used Spark SQL to load JSON data, create a SchemaRDD, and load it into Hive tables; handled structured data using Spark SQL.
Representative Scala developer resume experience can include: a strong appreciation of how to test applications efficiently and effectively, and proven experience in building data-driven applications using a combination of Java/Scala and the Spark framework.
* Involved in moving all log files generated from various sources to HDFS for further processing through Flume.
* Good level of experience in core Java and J2EE technologies such as JDBC, Servlets, and JSP.
* Used Sqoop to efficiently transfer data between databases and HDFS, and used Flume to stream the log data from servers.
* Used the Spark API over Hortonworks Hadoop YARN to perform analytics on data in Hive.
* Followed the Scrum approach for the development process.
* Installed and configured Flume, Hive, Pig, Sqoop, and Oozie on the Hadoop cluster.
* Migrated ETL processes from Oracle to Hive to test easy data manipulation.
* Experience processing Avro data files using Avro tools and MapReduce programs.
PROFESSIONAL SUMMARY: * + years of overall IT experience in a variety of industries, including hands-on experience of 3+ years in big data technologies and in designing and implementing MapReduce jobs.
Career Objective:
* Implemented predefined operators in Spark such as map, flatMap, filter, reduceByKey, groupByKey, aggregateByKey, and combineByKey.
* Implemented the Spring MVC design paradigm for website design.
* Optimized SAX and DOM parsers for XML production data.
* An information technology professional having overall 8+ years of IT experience, including 4 years of experience in big data development.
* Developed the presentation layer of the project using HTML, CSS, JSP, and JavaScript.
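The operators listed above (reduceByKey, groupByKey, and so on) all combine values per key. What reduceByKey does can be sketched locally; the helper name below is ours for illustration, not a Spark API:

```python
# Local sketch of reduceByKey semantics: values are combined per key
# with the supplied associative function (helper name is hypothetical).
def reduce_by_key(pairs, fn):
    out = {}
    for key, value in pairs:
        out[key] = fn(out[key], value) if key in out else value
    return out

sales = [("east", 10), ("west", 5), ("east", 7)]
totals = reduce_by_key(sales, lambda a, b: a + b)

print(totals)  # {'east': 17, 'west': 5}
```

On a real RDD, reduceByKey is preferred over groupByKey for aggregations because it combines values on the map side before the shuffle, moving far less data across the network.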

