Data Modeler for a Life-Insurance Client

Duties and Responsibilities:

  • Understand and translate business needs into data models supporting long-term solutions.
  • Work with the Application Development team to implement data strategies, build data flows and develop data models.
  • Create logical and physical data models using best practices to ensure high data quality and reduced redundancy.
  • Optimize and update logical and physical data models to support new and existing projects.
  • Maintain logical and physical data models along with corresponding metadata.
  • Develop data models following standard naming conventions and best practices to ensure consistency of data models with company standards.
  • Recommend opportunities for reuse of data models in new environments/projects.
  • Evaluate data models and physical databases for variances and discrepancies.
  • Validate business data objects for accuracy and completeness.
  • Analyze data-related system integration challenges and propose appropriate solutions.
  • Guide System Analysts, Engineers, Programmers and others on project limitations and capabilities, performance requirements and interfaces.
  • Review modifications to existing databases to improve efficiency and performance.
  • Past experience with the ACORD Reference Architecture, an enterprise architecture framework for the insurance industry.
  • At least one implementation converting logical data models to columnar schemas in HBase or a similar store to enable the required data-processing and querying patterns.
  • Discover metadata of the source database, including value patterns and distributions, key candidates, foreign-key candidates, and functional dependencies.
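The logical-to-columnar conversion mentioned above can be sketched roughly. This is a minimal illustration, assuming a hypothetical Policy entity with made-up attributes and a made-up row-key design; it is not the client's actual data model, and it represents the HBase put as plain Python data rather than using a client library.

```python
# Sketch: mapping a logical Policy entity to an HBase-style columnar layout.
# Entity name, attributes, and key design are illustrative assumptions only.

def to_hbase_row(policy: dict) -> tuple[str, dict]:
    """Flatten a logical Policy record into an HBase row key plus
    {family:qualifier: value} cells.

    Row-key design: reversed policy number + effective date, so versions of
    one policy cluster together while avoiding region hot-spotting on
    monotonically increasing policy numbers.
    """
    row_key = f"{policy['policy_no'][::-1]}#{policy['effective_date']}"
    cells = {
        # 'b' family: base attributes read on almost every access
        "b:status": policy["status"],
        "b:product": policy["product_code"],
        # 'h' family: policy-holder attributes, read less often
        "h:name": policy["holder"]["name"],
        "h:dob": policy["holder"]["dob"],
    }
    return row_key, cells


row_key, cells = to_hbase_row({
    "policy_no": "P1002345",
    "effective_date": "2020-04-01",
    "status": "ACTIVE",
    "product_code": "TERM20",
    "holder": {"name": "Taro Yamada", "dob": "1980-01-15"},
})
print(row_key)  # 5432001P#2020-04-01
```

Splitting hot and cold attributes into separate column families is one common way to make the columnar schema serve the dominant read patterns.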
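The metadata-discovery duty (key candidates, foreign-key candidates) can likewise be sketched. This is a simplified profiling sketch over in-memory sample rows with invented table and column names; real discovery would run against the source database and also consider composite keys and functional dependencies.

```python
# Sketch: discovering key and foreign-key candidates from sample rows.
# Table and column names are made up for illustration.

def key_candidates(rows: list[dict]) -> list[str]:
    """Columns whose values are non-null and unique across all rows."""
    out = []
    for col in rows[0]:
        vals = [r[col] for r in rows]
        if None not in vals and len(set(vals)) == len(vals):
            out.append(col)
    return out

def fk_candidates(child: list[dict], parent: list[dict]) -> list[tuple[str, str]]:
    """(child_col, parent_col) pairs where every child value appears in a
    parent key-candidate column (an inclusion-dependency check)."""
    pairs = []
    parent_keys = key_candidates(parent)
    for cc in child[0]:
        child_vals = {r[cc] for r in child}
        for pc in parent_keys:
            if child_vals <= {r[pc] for r in parent}:
                pairs.append((cc, pc))
    return pairs


policies = [
    {"policy_no": "P1", "holder_id": 10},
    {"policy_no": "P2", "holder_id": 11},
    {"policy_no": "P3", "holder_id": 10},
]
holders = [{"holder_id": 10, "name": "A"}, {"holder_id": 11, "name": "B"}]

print(key_candidates(policies))         # ['policy_no']
print(fk_candidates(policies, holders))  # [('holder_id', 'holder_id')]
```

Inclusion alone only suggests a foreign key; on real data the candidates would still need validation against the business meaning of the columns.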

Requirements and Qualifications:

Required skills: 

  • Strong data modeling skills using Erwin.
  • Experience working with HDFS, HBase, Solr and RDBMS (Oracle/SQL Server).
  • Proven knowledge of and experience with data structures and data management.
  • Experience with SQL and NoSQL databases.
  • Broad knowledge of the Hadoop ecosystem and Spark is a plus.
  • Exposure to new platforms and the ability to adapt to a continuously evolving ecosystem of multiple moving components, including Hive, HBase, Zeppelin, Spark, Livy, Solr, Qlik Sense, Power BI and other BI tools.
  • Basic engineering skills, such as a background in mathematics and statistics; analytical thinking (able to drive reporting and analytical requests).
  • Knowledge of graph theory and graph databases is a plus.
  • Excellent communication skills in Japanese; business-level English.

Focus: a very high-performance target Data Hub in Hadoop with standardized incremental ingestion, transformation and consumption layers.
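The incremental-ingestion pattern named in the focus can be sketched with a high-watermark approach. This is a toy sketch under assumptions not stated in the posting: an `updated_at` change column and ISO-formatted date strings are invented for illustration.

```python
# Sketch: watermark-driven incremental ingestion into a data-hub layer.
# The change column ('updated_at') and sample data are illustrative assumptions.

def incremental_batch(source_rows: list[dict], last_watermark: str):
    """Select only rows modified since the last successful load and
    return them together with the new high-watermark."""
    batch = [r for r in source_rows if r["updated_at"] > last_watermark]
    new_wm = max((r["updated_at"] for r in batch), default=last_watermark)
    return batch, new_wm


rows = [
    {"id": 1, "updated_at": "2021-01-01"},
    {"id": 2, "updated_at": "2021-02-01"},
    {"id": 3, "updated_at": "2021-03-01"},
]
batch, wm = incremental_batch(rows, "2021-01-15")
print(len(batch), wm)  # 2 2021-03-01
```

Persisting the returned watermark after each successful load is what makes the ingestion standardized and repeatable across source tables.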


  • Patient and verbally adept, especially with senior business users.
  • Able to understand non-technical specifications and translate them into IT requirements.
  • Able to adapt to users’ needs while remaining assertive enough to navigate a complex, demanding and continuously growing environment.
  • Good prioritization skills and basic project management skills; able to absorb and handle multiple requests at the same time.

Career level: Experienced (non-manager)


Education: Bachelor's degree or equivalent

IT Solutions