# bq
> A Python-based tool for BigQuery, Google Cloud's fully managed and completely serverless enterprise data warehouse.
> More information: <https://cloud.google.com/bigquery/docs/reference/bq-cli-reference>.
- Run a query against a BigQuery table using standard SQL (add the `--dry_run` flag to estimate the number of bytes read by the query):
`bq query --nouse_legacy_sql 'SELECT COUNT(*) FROM {{dataset_name}}.{{table_name}}'`
- Run a parameterized query:
`bq query --use_legacy_sql=false --parameter='ts_value:TIMESTAMP:2016-12-07 08:00:00' 'SELECT TIMESTAMP_ADD(@ts_value, INTERVAL 1 HOUR)'`
- Create a new dataset or table in the US location:
`bq mk --location=US {{dataset_name}}.{{table_name}}`
- List all datasets in a project:
`bq ls --filter labels.{{key}}:{{value}} --max_results {{integer}} --format=prettyjson --project_id {{project_id}}`
- Batch load data from a file in formats such as CSV, newline-delimited JSON, Parquet, and Avro into a table:
`bq load --location {{location}} --source_format {{CSV|NEWLINE_DELIMITED_JSON|PARQUET|AVRO}} {{dataset}}.{{table}} {{path_to_source}}`
- Copy one table to another:
`bq cp {{dataset}}.{{old_table}} {{dataset}}.{{new_table}}`
- Display help:
`bq help`