Anand Sunder
Sr. Splunk Developer and Administrator
splunkanand@gmail.com
Summary:
- 8+ years of experience in Information Technology, with strong experience as a Splunk developer and in software analysis, design, and development for various applications, providing Business Intelligence solutions in data warehousing for decision support systems and database application development.
- Around 2 years of experience in operational intelligence using Splunk 5.x and 6.x.
- Expert in extracting, transforming, analyzing, visualizing, and presenting data from diverse business areas in novel and insightful ways, enabling directors, vice presidents, and C-level executives to take informed action.
- Good experience in all facets of the SDLC: requirements analysis, design, development, testing, and post-implementation revisions.
- Good knowledge of Splunk architecture and its components (indexers, forwarders, search heads, deployment server), Heavy and Universal Forwarders, and the license model.
- Expertise in preparing, organizing, and testing Splunk search strings and operational strings.
- Worked on large datasets to generate insights using Splunk.
- Experience in developing Splunk queries and dashboards targeted at understanding application performance and capacity analysis; a representative search appears at the end of this summary.
- Experience with Splunk authentication and permissions, and significant experience supporting large-scale Splunk deployments.
- Expert in installing Splunk apps in Linux and UNIX environments.
- Knowledge of Splunk configuration files (props.conf, transforms.conf, outputs.conf).
- Extensive knowledge in creating Actuate reports using XML, plus dashboards, visualizations, and pivot tables for business users with Jaspersoft software.
- Extensive experience in data warehouse, data mart, data integration, and data conversion (ETL) projects using Informatica PowerCenter 9.5/8.x/7.x/6.2/5.0 tools (Source Analyzer, Mapping Designer, Mapplet Designer, Transformation Designer, Repository Manager, and Server Manager) as the ETL tool on Oracle/DB2 databases.
- Experience in Big Data; familiar with Hadoop ecosystem components (HDFS, Hive, HBase, Pig) and with Hadoop application development.
- Extensive experience writing packages, stored procedures, functions, and database triggers using PL/SQL and UNIX shell scripts; also handled Oracle utilities such as SQL*Loader and Import.
- Experienced in all data processing phases, from enterprise and data modeling (logical and physical) to data warehousing (ETL).
- Knowledge of service-oriented architecture (SOA), workflows, and web services using XML, SOAP, and WSDL.
- Expertise in RDBMS like Oracle, MS SQL Server, MySQL and DB2.
- Experienced in BI tools such as TIBCO Jaspersoft and Tableau, designing customized, interactive, and visually rich dashboards using different connectors, extensions, marks, actions, filters, parameters, calculations, and relationships.
- Working knowledge of data warehouse techniques and practices, experience including ETL processes, dimensional data modeling (Star Schema, Snowflake Schema, FACT & Dimension Tables), OLTP and OLAP.
- Excellent analytical, coordination, interpersonal, leadership, organizational, and problem-solving skills; able to adapt to new technologies and become proficient in them very quickly.
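As a representative illustration of the dashboard-oriented searches noted above, a minimal SPL sketch (the index, sourcetype, and field names are hypothetical placeholders, not taken from any specific engagement):

    index=web sourcetype=access_combined status>=500
    | timechart span=15m count AS error_count avg(response_time) AS avg_resp_ms
    | eval avg_resp_ms=round(avg_resp_ms, 1)

This charts error volume and average response time in 15-minute buckets, the kind of panel used for application performance and capacity dashboards.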
Technical Experience:
Languages: C, T-SQL, Java, C++
Operating Systems: Windows 95/98/2000/XP/NT/2008, MS-DOS, Linux, UNIX
Tools: Splunk 6.0/6.2.2/6.3.0, Jaspersoft, OBIEE, SSRS, SAP BusinessObjects
Databases: Oracle 11g/10g/9i/8i, MS SQL Server 2012/2008/2005/2000, Sybase, DB2, MS Access
Data Modeling: OLAP and OLTP concepts, entities, attributes, cardinality, CA Erwin DM (9.x/8.x/7.x), dimensional data modeling, conceptual/logical/physical data modeling, ER models (Star Schema, Snowflake Schema, fact and dimension tables)
Data Analysis: requirements analysis, business analysis, detailed design, data flow diagrams, data definition tables, business rules, data modeling, system integration
Data Warehousing: Informatica Metadata Manager, Informatica PowerCenter/PowerExchange 9.5/9.1/8.5/8.1.1/7.1.2, Informatica Designer, Workflow Manager, Workflow Monitor, mapplets, transformations
Web Technologies: XML, VBScript, JavaScript, HTML, SOAP, CSS, UNIX shell scripting
Professional Experience:
Southwest Airlines, Dallas, TX Jan 2014 to Present
Sr. Splunk Developer/Administrator
Southwest Airlines Co. operates Southwest Airlines (Southwest), a passenger airline that provides scheduled air transportation in the United States and near-international markets.
Ingested data from various systems into Splunk, built reports/dashboards, and provided real-time monitoring of business information. Southwest Airlines uses Splunk for deriving operational intelligence and for monitoring IT infrastructure (server health, status, etc.).
Responsibilities:
- Implemented and integrated a Splunk cluster to allow users to monitor, chart, and interpret miscellaneous data sources.
- Developed multiple Splunk applications that enabled customers to visualize data and key performance indicators
- Designed, deployed, and implemented Splunk Enterprise for monitoring and alerting on the Health Artifact & Imaging Solutions platform.
- Installed Splunk forwarders to capture logs and data from a variety of sources and deliver them to Splunk indexers per the system architecture.
- Managed roles and security in Splunk, creating new users and assigning roles.
- Experience creating knowledge objects such as field extractions, data models, and event types.
- Good experience troubleshooting Splunk problems such as searches not running, dashboards not loading, etc.
- Integrated ServiceNow with Splunk to generate incidents from Splunk.
- Worked on DB Connect configuration for MySQL and MSSQL.
- Performed field extractions using IFX, the rex command, and regex in configuration files; a minimal props/transforms sketch appears after this list.
- Built various chart types and alert settings; knowledge of app creation and user/role access permissions, including creating and managing apps, creating users and roles, and granting permissions to knowledge objects.
- Solid grasp of parsing, indexing, and searching concepts, and of hot, warm, cold, and frozen bucketing.
- Involved in standardizing Splunk forwarder deployment, configuration and maintenance across UNIX and Windows platforms.
- Worked on setting up Splunk to capture and analyze data from various layers: load balancers, web servers, and application servers.
- Captured data from various front-end and middleware applications.
- Created dashboards to monitor traffic volume, response times, errors, and warnings across applications.
- Used techniques to optimize searches for better performance, including search-time vs. index-time field extraction, with an understanding of configuration files, their precedence, and how they work.
- Created dashboards from searches and scheduled searches; understand the trade-offs between inline and scheduled searches in a dashboard (see the scheduled-search sketch after this list).
- Used Jaspersoft to create an embedded BI platform to accelerate decision-making that otherwise relied on transactional data retrieved from MS SQL Server.
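A minimal sketch of the search-time field extraction wired through configuration files, as described above (the sourcetype, stanza name, and field name are illustrative assumptions, not production settings):

    # props.conf -- bind a report-based extraction to a sourcetype
    [access_combined]
    REPORT-session = extract_session_id

    # transforms.conf -- the named capture group becomes the field
    [extract_session_id]
    REGEX = session_id=(?<session_id>[^\s&]+)

Because REPORT-based extractions run at search time, they can be adjusted without re-indexing data, which is the practical heart of the search-time vs. index-time trade-off.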
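And a sketch of a scheduled search of the kind that backs a dashboard panel or alert (index, sourcetype, and field names are hypothetical):

    index=app sourcetype=app_logs log_level=ERROR
    | timechart span=5m count BY host

Saved with a cron-style schedule, the dashboard panel references the saved results instead of re-running the SPL inline on every page load, which is why scheduled searches are preferred for expensive panels.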
Environment: Splunk 6.0.7, MS SQL Server 2012, SQL, Linux, UNIX, Oracle 11g.
Fannie Mae, McLean, VA Nov 2013 to Dec 2014
Splunk Developer/Admin
Fannie Mae has a vast infrastructure that needs to be monitored with one tool that provides BI-like reports and generates appropriate alerts. Splunk is deployed for this purpose, and data is ingested from applications such as Cisco VPN, McAfee Web Gateway, Symantec Endpoint Protection, Palo Alto Networks, MobileIron, and SharePoint website logs. These logs are ingested in real time, and alerts are generated alongside appropriate reports for the corresponding teams.
Responsibilities:
- Installation, configuration, and deployment of Splunk forwarders, indexers, search heads, and the deployment server.
- Onboarded new data into Splunk; troubleshot Splunk and optimized performance.
- Used Splunk Enterprise Security to configure correlation searches, key indicators, and the risk scoring framework.
- Worked with the Splunk GUI, the command-line interface, and directly with configuration files.
- Configured Splunk multisite indexer cluster for data replication.
- Experience developing Splunk queries and dashboards targeted at understanding application performance and capacity analysis.
- Knowledge of various search commands such as stats, chart, timechart, transaction, eval (with strptime/strftime), where, xyseries, and table, and of the difference between eventstats and stats (illustrated after this list).
- Experience with Splunk authentication and permissions, and significant experience supporting large-scale Splunk deployments.
- Managed Splunk Universal Forwarder deployment and configuration; monitored and maintained Splunk performance and optimization after deployment.
- Provided all-round support for Splunk forwarder logging issues, troubleshooting servers not forwarding events.
- Monitored and tracked Splunk performance problems and administration, opening tickets with Splunk when needed.
- Experience with the Splunk 5.x and 6.x products and distributed Splunk architecture, including search heads, indexers, and forwarders.
- Created dashboards, reports, scheduled searches, and alerts.
- Resolved configuration based issues in coordination with infrastructure support teams.
- Experience with search-time vs. index-time field extraction.
- Good understanding of configuration files and their precedence, with daily exposure to props.conf, transforms.conf, inputs.conf, and outputs.conf; set up forwarder inputs based on requirements (see the inputs/outputs sketch after this list).
- Maintenance of Splunk Environment with multiple Indexers.
- Managed and configured index settings and created event type definitions; analyzed security events, risks, and reporting instances.
- Developed custom web application solutions for internal ticket metrics reporting.
- Set indexing property configurations, including time zone offsets and custom sourcetype rules; configured regex transformations to run on data inputs, used in tandem with props.conf.
- Designed core scripts to automate Splunk maintenance and alerting tasks.
- Integrated ServiceNow with Splunk to generate incidents from Splunk.
- Lowered the cost and risk of big data initiatives with a full-featured platform to rapidly explore, analyze, and visualize data in Hadoop.
- Analyzed data in Hadoop data stores through visual interactions, using Splunk for deeper analysis.
- Worked on data model relationships over underlying raw data, making it more meaningful and useful for quickly generating charts, visualizations, and dashboards using Pivot.
- Analyzed massive amounts of real-time data in Hadoop using Splunk Enterprise for operational intelligence.
- Experience in reporting with TIBCO Jaspersoft: development, deployment, management, and performance tuning of reports in various formats.
- Built various chart types and alert settings; knowledge of app creation and user/role access permissions, including creating and managing apps, creating users and roles, and granting permissions to knowledge objects.
- Interacted with the data warehousing team on data extraction and suggested standard data formats so that Splunk identifies most fields automatically.
- Experienced in using various connectors to extract the data from different data sources.
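To illustrate the stats vs. eventstats distinction noted above (index and field names are hypothetical):

    index=web sourcetype=access_combined
    | stats avg(bytes) AS avg_bytes BY host

returns one summary row per host, whereas

    index=web sourcetype=access_combined
    | eventstats avg(bytes) AS avg_bytes BY host
    | where bytes > 2 * avg_bytes

keeps every event, appends avg_bytes to each, and can therefore filter individual events against their host's average.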
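A minimal sketch of the forwarder-side setup referenced above (monitored path, index name, and indexer addresses are assumptions):

    # inputs.conf on the Universal Forwarder -- what to collect
    [monitor:///var/log/app/app.log]
    sourcetype = app_logs
    index = app

    # outputs.conf -- where to send it
    [tcpout]
    defaultGroup = primary_indexers

    [tcpout:primary_indexers]
    server = idx1.example.com:9997, idx2.example.com:9997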
Environment: Pivotal HD, Linux, UNIX, Oracle 11g, MS SQL Server 2012, HBase, Hadoop, ServiceNow, XML, Splunk 6.1/6.2.
SEI Investments – Oaks, PA Jul 2012 to Oct 2013
Informatica Developer
SEIC is a global provider of asset management, investment processing, and investment operations solutions. Investment Manager Services (IMS) offered by SEI uses Netik's data warehouse, which has been widely used across the investment/investor services sectors for all aspects of reporting and data management. Data is collected from multiple sources, rationalized, enriched, and reconciled using the Informatica ETL tool so that it can be leveraged as the source for reporting.
Responsibilities:
- Responsible for full life cycle development including gathering requirements.
- Analyzing, coding, testing, and assisting with user acceptance testing, production implementation and system support for the IMS Data Warehouse Application.
- Took part in interviewing users, gathering and documenting requirements, and determining project scope; analyzed user requirements to create system designs, shared in coding and development, and performed unit testing, system testing, final implementation, and post-implementation monitoring.
- Extracted data from Excel files, high-volume data sets from data files, Oracle, DB2, and Salesforce.com (SFDC) using Informatica ETL mappings and SQL/PL-SQL scripts, and loaded it into the data store area.
- Created complex mappings in PowerCenter Designer using Aggregator, Expression, Filter, Sequence Generator, Update Strategy, SQL, Union, Lookup (connected and unconnected), Joiner, and XML Source Qualifier transformations.
- Involved in Debugging and Troubleshooting Informatica mappings.
- Populated error tables as part of the ETL process to capture records that failed migration (a simplified SQL sketch follows this list).
- Used Informatica PowerCenter 9.1/9.0.1/8.6.1 for extraction, transformation, and loading (ETL) of data in the data warehouse.
- Implemented various data transformations for Slowly Changing Dimensions.
- Hands-on administration experience maintaining Repository Manager: creating repositories, user groups, and folders, and migrating code from Dev to Test and from Test to Prod environments.
- Changed and tested Informatica mappings to analyze and fix gaps in daily jobs for the IMS project.
- Created e-mail notification tasks using post-session scripts.
- Designed and developed unit test cases for system integration testing.
- Involved with the users in the creation of test scripts for user acceptance testing.
- Tuned individual mappings and SQL Server queries that were causing performance bottlenecks.
- Wrote documentation describing program development, logic, coding, testing, and changes made.
- Worked on data request tickets and assisted (non-technical) business users in understanding the quality of the data.
- Responded to and resolved ad hoc requests from application and business users; produced extensive documentation on the design, development, implementation, daily loads, and process flow of the mappings.
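A simplified illustration of the error-table pattern mentioned above (table and column names are hypothetical; in the project this logic was driven by Informatica mappings rather than hand-written SQL):

    -- capture staging records that fail a basic validation rule
    INSERT INTO etl_error_log (src_table, src_key, error_msg, load_date)
    SELECT 'TRADE_STG', t.trade_id, 'Missing settlement date', SYSDATE
      FROM trade_stg t
     WHERE t.settle_date IS NULL;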
Environment: Informatica PowerCenter 8.6.1/9.1, Informatica PowerExchange, Oracle 11g/10g, SQL Server 2005/2008, IBM Mainframe, T-SQL, MS Excel, Windows XP/2003/2008, CA Scheduler.
AmerisourceBergen – Chesterbrook, PA Feb 2011 to Jun 2012
Informatica Developer
AmerisourceBergen developed a rich internet business intelligence application used to monitor and forecast the budgets of several AmerisourceBergen marketing business models and their risk of capturing market share in a competitive global market. Worked as an Informatica application developer with a primary focus on developing several individual BI applications (modules) such as Market Model and Fleet Intelligence.
Responsibilities:
- Involved in system study, analyzing requirements with the client, and designing the complete system.
- Developed mappings, reusable objects, transformations, and mapplets using Mapping Designer, Transformation Developer, and Mapplet Designer in Informatica PowerCenter 8.6.1.
- Created reusable transformations and mapplets and used them in mappings.
- Used Informatica PowerCenter 8.6.1 for extraction, transformation, and loading (ETL) of data in the data warehouse.
- Implemented Slowly Changing Dimensions (SCD) Type 1 and Type 2 to maintain current and historical information in the dimension tables (see the SQL sketch after this list).
- Used Informatica PowerCenter Workflow Manager to create sessions and batches to run the logic embedded in the mappings.
- Created complex mappings in PowerCenter Designer using Aggregator, Expression, Filter, Sequence Generator, Update Strategy, SQL, Union, Lookup (connected and unconnected), Joiner, and XML Source Qualifier transformations.
- Created e-mail notification tasks using post-session scripts.
- Designed and developed unit test cases for system integration testing.
- Involved with the users in the creation of test scripts for user acceptance testing.
- Developed scripts to take weekly backups of certain tables using Korn shell and import/export utilities.
- Tuned individual mappings and Oracle queries that were causing performance bottlenecks.
- Wrote documentation describing program development, logic, coding, testing, and changes made.
- Worked on data request tickets and assisted (non-technical) business users in understanding the quality of the data.
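A compact SQL sketch of the SCD Type 2 logic described above (dimension/staging table names and columns are hypothetical; in the project this was implemented through Informatica mappings):

    -- expire the current version of rows whose tracked attribute changed
    UPDATE customer_dim d
       SET d.current_flag = 'N', d.effective_end = SYSDATE
     WHERE d.current_flag = 'Y'
       AND EXISTS (SELECT 1 FROM customer_stg s
                    WHERE s.customer_id = d.customer_id
                      AND s.address <> d.address);

    -- insert a fresh current version for changed and brand-new customers
    INSERT INTO customer_dim (customer_key, customer_id, address,
                              effective_start, effective_end, current_flag)
    SELECT customer_dim_seq.NEXTVAL, s.customer_id, s.address, SYSDATE, NULL, 'Y'
      FROM customer_stg s
     WHERE NOT EXISTS (SELECT 1 FROM customer_dim d
                        WHERE d.customer_id = s.customer_id
                          AND d.current_flag = 'Y');

Type 1, by contrast, simply overwrites the changed attribute in place; Type 2 preserves history by versioning rows.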
Environment: SQL Server 2005/2008, Informatica PowerCenter 8.6.1, Oracle 10g/9i, DB2, PL/SQL, Toad, MS Access, Shell Scripting, IBM Utilities, Sun Solaris 9.0, Windows 2003/2008, AutoSys.
NIKE, Inc. – Beaverton, OR Nov 2009 to Jan 2011
Informatica Developer
Nike, Inc. is an American multinational corporation engaged in the design, development, manufacturing, and worldwide marketing and selling of footwear, apparel, equipment, accessories, and services. The primary objective was to develop and support Enterprise Reporting with a focus on a financial application, the Partner Payment System (ICM).
Responsibilities:
- Worked with different data sources such as Oracle, delimited, and fixed-width flat files.
- Extracted data from flat files, COBOL files (using the Normalizer transformation), and an Oracle database, and applied business logic to load the data into the central Oracle database.
- Developed mappings, reusable objects, transformations, and mapplets using Mapping Designer, Transformation Developer, and Mapplet Designer in Informatica PowerCenter 8.6.1.
- Created reusable transformations and mapplets and used them in mappings.
- Used Informatica PowerCenter 8.6.1 for extraction, transformation, and loading (ETL) of data in the data warehouse.
- Implemented Slowly Changing Dimensions (SCD) Type 1 and Type 2 to maintain current and historical information in the dimension tables.
- Used Informatica PowerCenter Workflow Manager to create sessions and batches to run the logic embedded in the mappings.
- Involved in the creation of folders, users, and deployment groups using Repository Manager.
- Used DB2 export and import commands on UNIX to load data into DB2 tables.
- Created complex mappings in PowerCenter Designer using Aggregator, Expression, Filter, Sequence Generator, Update Strategy, SQL, Union, Lookup (connected and unconnected), Joiner, and XML Source Qualifier transformations.
- Developed UNIX Shell Scripts for scheduling the sessions in Informatica.
- Created e-mail notification tasks using post-session scripts.
- Used the Change Data Capture (CDC) option to capture inserts, updates, and deletes as soon as they occur and transfer them to multiple targets without intermediate queues or staging tables.
- Provided backup support for Administration of Business Objects XI.
- Involved in generating various reports using Crystal Reports XI (BOXI) and Hyperion Interactive Reporting 9.3.
- Reviewed session logs, fixed data and re-ran with the recover option, or re-ran the same job, in production support.
- Administered Hyperion Performance Suite 8.3: created new users and groups and managed them by granting/modifying access privileges to users, groups, and folders.
- Worked closely with the Business Users to get the requirements of the project.
- Worked with the command-line program pmcmd to interact with the server: starting and stopping sessions and batches, stopping the Informatica server, and recovering sessions (example usage after this list).
- Wrote SQL and PL/SQL stored procedures, triggers, and cursors to implement business needs.
- Created procedures to drop and recreate the indexes in the target Data warehouse before and after the sessions.
- Wrote documentation describing program development, logic, coding, testing, and changes made.
- Wrote shell scripts to archive files older than a year, rename files according to end-user-defined names, and combine multiple source files into a single large file (a sketch follows this list).
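Typical pmcmd usage of the sort described above (service, domain, folder, and workflow names are placeholders; exact flags vary by PowerCenter version):

    # start a workflow and wait for it to complete
    pmcmd startworkflow -sv Int_Svc -d Domain_Dev -u admin -p '***' -f FIN_FOLDER -wait wf_daily_load

    # stop a running workflow
    pmcmd stopworkflow -sv Int_Svc -d Domain_Dev -u admin -p '***' -f FIN_FOLDER wf_daily_load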
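And a sketch of the archiving/combining script from the last bullet (paths and the naming convention are assumptions):

    #!/bin/ksh
    # move files older than a year into the archive area
    find /data/inbound -type f -mtime +365 -exec mv {} /data/archive/ \;

    # combine the day's source files into one large, business-named file
    cat /data/inbound/src_*.dat > /data/outbound/combined_$(date +%Y%m%d).dat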
Environment: Informatica PowerCenter 8.6.1, Oracle 10g, DB2, SQL Server 2005/2008, PL/SQL, XML, Toad, MS Access, Windows 2003, UNIX, AutoSys, Hyperion Performance Suite 8.3, Hyperion Interactive Reporting 9.3, Business Objects Crystal Reports XI.
iLabs, Hyderabad, India June 2007 – Oct 2009
ETL/SQL Developer
Responsibilities:
- Analyzed business requirements and worked closely with various application and business teams to develop ETL procedures consistent across all applications and systems.
- Documented technical specifications, business requirements, and functional specifications for the development of Informatica mappings to load data into various tables, and defined ETL standards.
- Installed and configured Informatica Server and PowerCenter 7.2; migrated metadata changes to the Informatica repository.
- Responsible for Data Import/Export, Data Conversions and Data Cleansing.
- Created Informatica mappings with SQL procedures to build business rules to load data.
- Worked with Informatica PowerCenter 7.1.3 tools: Source Analyzer, Warehouse Designer, Mapping Designer, Mapplets, and Transformations.
- Created and configured Workflows, Worklets, and Sessions to transport the data to target using Informatica Workflow Manager.
- Extensively involved in performance tuning at source, target, mapping, session and system levels by analyzing the reject data.
- Maintained Development, Test and Production mapping migration using Repository Manager.
- Extensively performed unit testing and system or integration testing.
- Generated PL/SQL scripts and UNIX Shell scripts for automated daily load processes.
- Worked extensively on Oracle SQL query performance tuning and created DDL and database objects such as tables, indexes, and sequences, working closely with DBAs.
- Developed several forms and reports in the process; also converted several standalone PL/SQL procedures/functions into packaged procedures for code reusability, modularity, and control.
- Designed tables, indexes, and constraints using TOAD and loaded data into the database using SQL*Loader (a sample control file follows this list).
- Tuned Informatica jobs with Oracle as the backend database.
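A minimal SQL*Loader control file of the kind used for such loads (file, table, and column names are hypothetical):

    -- customers.ctl
    LOAD DATA
    INFILE 'customers.dat'
    APPEND
    INTO TABLE stg_customers
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    (customer_id, customer_name, city, load_date DATE 'YYYY-MM-DD')

Invoked from the command line as, e.g., sqlldr userid=scott/tiger control=customers.ctl log=customers.log.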
Environment: Informatica 7.1.3, Business Objects, Oracle 8.1.7.4, SQL*Plus, PL/SQL, TOAD 7.1, UNIX, Windows XP.