Saving Array<T> to BigQuery using Java

Nov 26 2020

I am trying to save data to BigQuery using the Spark BigQuery connector. Suppose I have a Java POJO like the one below:

@Getter
@Setter
@AllArgsConstructor
@ToString
@Builder
public class TagList {
    private String s1;
    private List<String> s2;
}

Now, when I try to save this POJO to BigQuery, I get the following error:

Caused by: com.google.cloud.spark.bigquery.repackaged.com.google.cloud.bigquery.BigQueryException: Failed to load to test_table1 in job JobId{project=<project_id>, job=<job_id>, location=US}. BigQuery error was Provided Schema does not match Table <Table_Name>. Field s2 has changed type from STRING to RECORD
    at com.google.cloud.spark.bigquery.BigQueryWriteHelper.loadDataToBigQuery(BigQueryWriteHelper.scala:156)
    at com.google.cloud.spark.bigquery.BigQueryWriteHelper.writeDataFrameToBigQuery(BigQueryWriteHelper.scala:89)
    ... 35 more

Sample code:

Dataset<TagList> mapDS = inputDS.map((MapFunction<Row, TagList>) x -> {
    List<String> list = new ArrayList<>();
    list.add(x.get(0).toString());
    list.add("temp1");
    return TagList.builder()
            .s1("Hello World")
            .s2(list)
            .build();
}, Encoders.bean(TagList.class));

mapDS.write().format("bigquery")
        .option("temporaryGcsBucket", "<bucket_name>")
        .option("table", "<table_name>")
        .option("project", projectId)
        .option("parentProject", projectId)
        .mode(SaveMode.Append)
        .save();

BigQuery table:

create table <dataset>.<table_name> (
  s1 string,
  s2 array<string>
)
PARTITION BY
  TIMESTAMP_TRUNC(_PARTITIONTIME, HOUR);

Answer

DavidRabinowitz Nov 30 2020 at 18:59

Change the `intermediateFormat` to AVRO or ORC. When using Parquet (the default), serialization creates an intermediate structure, which is why the `array<string>` column arrives as a RECORD. See https://github.com/GoogleCloudDataproc/spark-bigquery-connector#properties for more details.
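The fix amounts to adding one writer option to the original `mapDS.write()` chain: `.option("intermediateFormat", "avro")` (or `"orc"`). As a minimal, Spark-free sketch of the resulting option set (the class and method names here are illustrative, not part of the connector API — in real code each entry is an `.option(...)` call on the `DataFrameWriter`):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class BigQueryWriteOptions {
    // Collects the connector options from the question, plus the fix:
    // forcing the intermediate file format to AVRO so array<string>
    // is loaded as REPEATED STRING instead of a Parquet-style RECORD.
    public static Map<String, String> options(String projectId) {
        Map<String, String> opts = new LinkedHashMap<>();
        opts.put("temporaryGcsBucket", "<bucket_name>");
        opts.put("table", "<table_name>");
        opts.put("project", projectId);
        opts.put("parentProject", projectId);
        opts.put("intermediateFormat", "avro"); // or "orc"
        return opts;
    }

    public static void main(String[] args) {
        options("<project_id>").forEach((k, v) -> System.out.println(k + "=" + v));
    }
}
```

With this option set, the writer call from the question becomes `mapDS.write().format("bigquery")` followed by one `.option(key, value)` per map entry, then `.mode(SaveMode.Append).save()`.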