Amazon Redshift to BigQuery

SQL translation reference
Contents

About this SQL translation reference
Data types
    Implicit conversion types
    Explicit conversion types
Query syntax
    SELECT statement
    FROM clause
    JOIN types
    WITH clause
    Set operators
    ORDER BY clause
    Conditions
Functions
    Aggregate functions
    Bitwise aggregate functions
    Window functions
    Conditional expressions
    Date and time functions
    Mathematical operators
    Math functions
    String functions
    Data type formatting functions
DML syntax
    INSERT statement
    COPY statement
    UPDATE statement
    DELETE, TRUNCATE statements
    MERGE statement
        Merge operation by replacing existing rows
DDL syntax
    SELECT INTO statement
    CREATE TABLE statement
    Temporary tables
    CREATE VIEW statement
User-defined functions (UDFs)
    CREATE FUNCTION syntax
    DROP FUNCTION syntax
    UDF components
Metadata and transaction SQL statements
    Multi-statement and multi-line SQL statements
Procedural SQL statements
    CREATE PROCEDURE statement
    Variable declaration and assignment
    Error condition handlers
    Cursor declarations and operations
    Dynamic SQL statements
    Flow-of-control statements
Consistency guarantees and transaction isolation
    Transactions
    Rollback
Database limits

About this SQL translation reference

This document details the similarities and differences in SQL syntax between Redshift and BigQuery. It can help accelerate the planning and execution of moving your enterprise data warehouse (EDW) to BigQuery. Redshift data warehousing is designed to work with Redshift-specific SQL syntax, so scripts written for Redshift might need to be altered before you can use them in BigQuery, because the SQL dialects vary between the services.

Note: In some cases, there is no direct mapping between a SQL element in Redshift and BigQuery. However, in most cases, you can achieve the same functionality in BigQuery that you can in Redshift by alternative means, as shown in the examples in this document.

This document is part of a series that discusses migrating data from Redshift to BigQuery. It is a companion to the following document:

● Amazon Redshift to BigQuery migration guide


 

Highlights

Purpose: To detail common similarities and differences in SQL syntax between Redshift and BigQuery, and to help accelerate the planning and execution of moving your enterprise data warehouse (EDW) to BigQuery.

Intended audience: Enterprise architects, DBAs, application developers, and IT security.

Key assumptions: That the audience is familiar with Redshift and is looking for guidance on transitioning to BigQuery.

Data types

This section shows equivalents between data types in Redshift and in BigQuery.

| Redshift data type | Redshift alias | BigQuery | Notes |
|---|---|---|---|
| SMALLINT | INT2 | INT64 | Redshift's SMALLINT is 2 bytes, whereas BigQuery's INT64 is 8 bytes. |
| INTEGER | INT, INT4 | INT64 | Redshift's INTEGER is 4 bytes, whereas BigQuery's INT64 is 8 bytes. |
| BIGINT | INT8 | INT64 | Both Redshift's BIGINT and BigQuery's INT64 are 8 bytes. |
| DECIMAL | NUMERIC | NUMERIC | |
| REAL | FLOAT4 | FLOAT64 | Redshift's REAL is 4 bytes, whereas BigQuery's FLOAT64 is 8 bytes. |
| DOUBLE PRECISION | FLOAT8, FLOAT | FLOAT64 | |
| BOOLEAN | BOOL | BOOL | Redshift's BOOLEAN can use TRUE, t, true, y, yes, and 1 as valid literal values for true. BigQuery's BOOL data type uses case-insensitive TRUE. |
| CHAR | CHARACTER, NCHAR, BPCHAR | STRING | |
| VARCHAR | CHARACTER VARYING, NVARCHAR, TEXT | STRING | |
| DATE | | DATE | |
| TIMESTAMP | TIMESTAMP WITHOUT TIME ZONE | DATETIME | |
| TIMESTAMPTZ | TIMESTAMP WITH TIME ZONE | TIMESTAMP | Note: In BigQuery, time zones are used when parsing timestamps or formatting timestamps for display. A string-formatted timestamp might include a time zone, but when BigQuery parses the string, it stores the timestamp in the equivalent UTC time. When a time zone is not explicitly specified, the default time zone, UTC, is used. Time zone names or offsets from UTC using (-\|+)HH:MM are supported, but time zone abbreviations such as PDT are not supported. |
| GEOMETRY | | GEOGRAPHY | Support for querying geospatial data. |

BigQuery also has the following data types that do not have a direct Redshift analog:

● ARRAY
● BYTES
● TIME
● STRUCT
Implicit conversion types

When migrating to BigQuery, you need to convert most of your Redshift implicit conversions to BigQuery's explicit conversions, except for the following data types, which BigQuery implicitly converts.

BigQuery performs implicit conversions for the following data types:

| From BigQuery type | To BigQuery type |
|---|---|
| INT64 | FLOAT64 |
| INT64 | NUMERIC |
| NUMERIC | FLOAT64 |

BigQuery also performs implicit conversions for the following literals:

| From BigQuery type | To BigQuery type |
|---|---|
| STRING literal (e.g. "2008-12-25") | DATE |
| STRING literal (e.g. "2008-12-25 15:30:00") | TIMESTAMP |
| STRING literal (e.g. "2008-12-25T07:30:00") | DATETIME |
| STRING literal (e.g. "15:30:00") | TIME |

Explicit conversion types

You can convert Redshift data types that BigQuery doesn't implicitly convert using BigQuery's CAST(expression AS type) function or any of the DATE and TIMESTAMP conversion functions.

When migrating your queries, change any occurrences of the Redshift CONVERT(type, expression) function (or the :: syntax) to BigQuery's CAST(expression AS type) function, as shown in the table in the Data type formatting functions section.

Query syntax 
This section addresses differences in query syntax between Redshift and BigQuery. 

 
 

 

SELECT statement

Most Redshift SELECT statements are compatible with BigQuery. The following table contains a list of minor differences.

| Redshift | BigQuery |
|---|---|
| SELECT TOP number expression FROM table | SELECT expression FROM table ORDER BY expression DESC LIMIT number |
| SELECT x/total AS probability, ROUND(100 * probability, 1) AS pct FROM raw_data  Note: Redshift supports creating and referencing an alias in the same SELECT statement. | SELECT x/total AS probability, ROUND(100 * (x/total), 1) AS pct FROM raw_data |

BigQuery also supports the following expressions in SELECT statements, which do not have a Redshift equivalent:

● EXCEPT
● REPLACE
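As an illustrative sketch (the dataset, table, and column names here are hypothetical), these modifiers let you trim or rewrite columns without listing every one:

```sql
-- BigQuery: all columns except order_id, with total rewritten in place
SELECT * EXCEPT (order_id) REPLACE (ROUND(total * 0.9, 2) AS total)
FROM mydataset.orders;
```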

FROM clause

A FROM clause in a query lists the table references that data is selected from. In Redshift, possible table references include tables, views, and subqueries. All of these table references are supported in BigQuery.

BigQuery tables can be referenced in the FROM clause using the following:

● [project_id].[dataset_id].[table_name]
● [dataset_id].[table_name]
● [table_name]

BigQuery also supports additional table references:

● Historical versions of the table definition and rows using FOR SYSTEM_TIME AS OF.
● Field paths, or any path that resolves to a field within a data type (such as a STRUCT).
● Flattened arrays.
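For example, a historical read with FOR SYSTEM_TIME AS OF might look like the following (the dataset and table names are hypothetical):

```sql
-- BigQuery: read the table as it existed one hour ago
SELECT *
FROM mydataset.orders
FOR SYSTEM_TIME AS OF TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR);
```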
 

 
 

 

JOIN types

Both Redshift and BigQuery support the following types of join:

● [INNER] JOIN
● LEFT [OUTER] JOIN
● RIGHT [OUTER] JOIN
● FULL [OUTER] JOIN
● CROSS JOIN and the equivalent implicit comma cross join

The following table contains a list of minor differences.

| Redshift | BigQuery |
|---|---|
| SELECT col1 FROM table1 NATURAL INNER JOIN table2 | SELECT col1 FROM table1 INNER JOIN table2 USING (col1, col2 [, ...])  Note: In BigQuery, JOIN clauses require a JOIN condition unless the clause is a CROSS JOIN or one of the joined tables is a field within a data type or an array. |
 

WITH clause

A BigQuery WITH clause contains one or more named subqueries that execute when a subsequent SELECT statement references them. Redshift WITH clauses behave the same as BigQuery's, with the exception that in Redshift you can evaluate the clause once and reuse its results.

Set operators

There are some minor differences between Redshift set operators and BigQuery set operators. However, all set operations that are feasible in Redshift are replicable in BigQuery.

| Redshift | BigQuery |
|---|---|
| SELECT * FROM table1 UNION SELECT * FROM table2 | SELECT * FROM table1 UNION DISTINCT SELECT * FROM table2  Note: Both BigQuery and Redshift support the UNION ALL operator. |
| SELECT * FROM table1 INTERSECT SELECT * FROM table2 | SELECT * FROM table1 INTERSECT DISTINCT SELECT * FROM table2 |
| SELECT * FROM table1 EXCEPT SELECT * FROM table2 | SELECT * FROM table1 EXCEPT DISTINCT SELECT * FROM table2 |
| SELECT * FROM table1 MINUS SELECT * FROM table2 | SELECT * FROM table1 EXCEPT DISTINCT SELECT * FROM table2 |
| SELECT * FROM table1 UNION SELECT * FROM table2 EXCEPT SELECT * FROM table3 | SELECT * FROM table1 UNION ALL ( SELECT * FROM table2 EXCEPT DISTINCT SELECT * FROM table3 )  Note: BigQuery requires parentheses to separate different set operations. If the same set operator is repeated, parentheses are not necessary. |
 

ORDER BY clause

There are some minor differences between Redshift ORDER BY clauses and BigQuery ORDER BY clauses.

| Redshift | BigQuery |
|---|---|
| In Redshift, NULLs are ranked last by default (ascending order). | In BigQuery, NULLs are ranked first by default (ascending order). |
| SELECT * FROM table ORDER BY expression LIMIT ALL | SELECT * FROM table ORDER BY expression  Note: BigQuery does not use the LIMIT ALL syntax, but ORDER BY sorts all rows by default, resulting in the same behavior as Redshift's LIMIT ALL clause. We highly recommend including a LIMIT clause with every ORDER BY clause. Ordering all result rows unnecessarily degrades query execution performance. |
| SELECT * FROM table ORDER BY expression OFFSET 10 | SELECT * FROM table ORDER BY expression LIMIT count OFFSET 10  Note: In BigQuery, OFFSET must be used together with a LIMIT count. Make sure to set the count INT64 value to the minimum necessary ordered rows. Ordering all result rows unnecessarily degrades query execution performance. |
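Because the default NULL ordering differs between the two systems, you can state it explicitly when migrating. A sketch with a hypothetical table and column:

```sql
-- BigQuery: reproduce Redshift's default of NULLs last in ascending order
SELECT *
FROM mydataset.customers
ORDER BY last_purchase_date ASC NULLS LAST
LIMIT 100;
```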

Conditions

The following table shows Redshift conditions, or predicates, that are specific to Redshift and must be converted to their BigQuery equivalent.

| Redshift | BigQuery |
|---|---|
| a = ANY (subquery)  a = SOME (subquery) | a IN (subquery) |
| a <> ALL (subquery)  a != ALL (subquery) | a NOT IN (subquery) |
| a IS UNKNOWN | a IS NULL |
| expression ILIKE pattern | LOWER(expression) LIKE LOWER(pattern) |
| expression LIKE pattern ESCAPE 'escape_char' | expression LIKE pattern  Note: BigQuery does not support custom escape characters. You must use two backslashes (\\) as escape characters for BigQuery. |
| expression [NOT] SIMILAR TO pattern | IF(LENGTH(REGEXP_REPLACE(expression, pattern, '')) = 0, True, False)  Note: If NOT is specified, wrap the IF expression in a NOT expression, as in NOT(IF(LENGTH(... |
| expression [!] ~ pattern | [NOT] REGEXP_CONTAINS(expression, regex) |
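As a sketch (the table, column, and pattern here are hypothetical), the ~ predicate translates as follows:

```sql
-- Redshift
SELECT * FROM users WHERE email ~ '@example\\.com$';

-- BigQuery
SELECT * FROM users WHERE REGEXP_CONTAINS(email, r'@example\.com$');
```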

 

 
 

 

Functions

The following sections list Redshift functions and their BigQuery equivalents.

Aggregate functions

The following table shows mappings between common Redshift aggregate, aggregate analytic, and approximate aggregate functions and their BigQuery equivalents.

| Redshift | BigQuery |
|---|---|
| APPROXIMATE COUNT(DISTINCT expression) | APPROX_COUNT_DISTINCT(expression) |
| APPROXIMATE PERCENTILE_DISC(percentile) WITHIN GROUP (ORDER BY expression) | APPROX_QUANTILES(expression, 100)[OFFSET(CAST(TRUNC(percentile * 100) AS INT64))] |
| AVG([DISTINCT] expression) | AVG([DISTINCT] expression) |
| COUNT(expression) | COUNT(expression) |
| LISTAGG([DISTINCT] aggregate_expression [, delimiter]) [WITHIN GROUP (ORDER BY order_list)] | STRING_AGG([DISTINCT] aggregate_expression [, delimiter] [ORDER BY order_list]) |
| MAX(expression) | MAX(expression) |
| MEDIAN(median_expression) | PERCENTILE_CONT(median_expression, 0.5) OVER() |
| MIN(expression) | MIN(expression) |
| PERCENTILE_CONT(percentile) WITHIN GROUP (ORDER BY expression) | PERCENTILE_CONT(expression, percentile) OVER()  Note: Does not cover aggregation use cases. |
| STDDEV([DISTINCT] expression) | STDDEV([DISTINCT] expression) |
| STDDEV_SAMP([DISTINCT] expression) | STDDEV_SAMP([DISTINCT] expression) |
| STDDEV_POP([DISTINCT] expression) | STDDEV_POP([DISTINCT] expression) |
| SUM([DISTINCT] expression) | SUM([DISTINCT] expression) |
| VARIANCE([DISTINCT] expression) | VARIANCE([DISTINCT] expression) |
| VAR_SAMP([DISTINCT] expression) | VAR_SAMP([DISTINCT] expression) |
| VAR_POP([DISTINCT] expression) | VAR_POP([DISTINCT] expression) |

BigQuery also offers the following aggregate, aggregate analytic, and approximate aggregate functions, which do not have a direct analog in Redshift:

● ANY_VALUE
● APPROX_TOP_COUNT
● APPROX_TOP_SUM
● ARRAY_AGG
● ARRAY_CONCAT_AGG
● COUNTIF
● CORR
● COVAR_POP
● COVAR_SAMP
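A worked sketch of the LISTAGG translation (the table and columns are hypothetical):

```sql
-- Redshift
SELECT region, LISTAGG(city, ', ') WITHIN GROUP (ORDER BY city)
FROM stores GROUP BY region;

-- BigQuery
SELECT region, STRING_AGG(city, ', ' ORDER BY city)
FROM stores GROUP BY region;
```

Note that the ordering moves from a WITHIN GROUP clause into the STRING_AGG argument list.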
 
Bitwise aggregate functions

The following table shows mappings between common Redshift bitwise aggregate functions and their BigQuery equivalents.

| Redshift | BigQuery |
|---|---|
| BIT_AND(expression) | BIT_AND(expression) |
| BIT_OR(expression) | BIT_OR(expression) |
| BOOL_AND(expression) | LOGICAL_AND(expression) |
| BOOL_OR(expression) | LOGICAL_OR(expression) |

BigQuery also offers the following bitwise aggregate function, which does not have a direct analog in Redshift:

● BIT_XOR
 
Window functions

The following table shows mappings between common Redshift window functions and their BigQuery equivalents. Windowing functions in BigQuery include analytic aggregate functions, aggregate functions, navigation functions, and numbering functions.

| Redshift | BigQuery |
|---|---|
| AVG(expression) OVER ([PARTITION BY expr_list] [ORDER BY order_list frame_clause]) | AVG(expression) OVER ([PARTITION BY expr_list] [ORDER BY order_list] [frame_clause]) |
| COUNT(expression) OVER ([PARTITION BY expr_list] [ORDER BY order_list frame_clause]) | COUNT(expression) OVER ([PARTITION BY expr_list] [ORDER BY order_list] [frame_clause]) |
| CUME_DIST() OVER ([PARTITION BY partition_expression] [ORDER BY order_list]) | CUME_DIST() OVER ([PARTITION BY partition_expression] ORDER BY order_list) |
| DENSE_RANK() OVER ([PARTITION BY expr_list] [ORDER BY order_list]) | DENSE_RANK() OVER ([PARTITION BY expr_list] ORDER BY order_list) |
| FIRST_VALUE(expression) OVER ([PARTITION BY expr_list] [ORDER BY order_list frame_clause]) | FIRST_VALUE(expression) OVER ([PARTITION BY expr_list] [ORDER BY order_list] [frame_clause]) |
| LAST_VALUE(expression) OVER ([PARTITION BY expr_list] [ORDER BY order_list frame_clause]) | LAST_VALUE(expression) OVER ([PARTITION BY expr_list] [ORDER BY order_list frame_clause]) |
| LAG(value_expr [, offset]) OVER ([PARTITION BY window_partition] ORDER BY window_ordering) | LAG(value_expr [, offset]) OVER ([PARTITION BY window_partition] ORDER BY window_ordering) |
| LEAD(value_expr [, offset]) OVER ([PARTITION BY window_partition] ORDER BY window_ordering) | LEAD(value_expr [, offset]) OVER ([PARTITION BY window_partition] ORDER BY window_ordering) |
| LISTAGG([DISTINCT] expression [, delimiter]) [WITHIN GROUP (ORDER BY order_list)] OVER ([PARTITION BY partition_expression]) | STRING_AGG([DISTINCT] aggregate_expression [, delimiter]) OVER ([PARTITION BY partition_list] [ORDER BY order_list]) |
| MAX(expression) OVER ([PARTITION BY expr_list] [ORDER BY order_list frame_clause]) | MAX(expression) OVER ([PARTITION BY expr_list] [ORDER BY order_list] [frame_clause]) |
| MEDIAN(median_expression) OVER ([PARTITION BY partition_expression]) | PERCENTILE_CONT(median_expression, 0.5) OVER ([PARTITION BY partition_expression]) |
| MIN(expression) OVER ([PARTITION BY expr_list] [ORDER BY order_list frame_clause]) | MIN(expression) OVER ([PARTITION BY expr_list] [ORDER BY order_list] [frame_clause]) |
| NTH_VALUE(expression, offset) OVER ([PARTITION BY window_partition] [ORDER BY window_ordering frame_clause]) | NTH_VALUE(expression, offset) OVER ([PARTITION BY window_partition] ORDER BY window_ordering [frame_clause]) |
| NTILE(expr) OVER ([PARTITION BY expression_list] [ORDER BY order_list]) | NTILE(expr) OVER ([PARTITION BY expression_list] ORDER BY order_list) |
| PERCENT_RANK() OVER ([PARTITION BY partition_expression] [ORDER BY order_list]) | PERCENT_RANK() OVER ([PARTITION BY partition_expression] ORDER BY order_list) |
| PERCENTILE_CONT(percentile) WITHIN GROUP (ORDER BY expr) OVER ([PARTITION BY expr_list]) | PERCENTILE_CONT(expr, percentile) OVER ([PARTITION BY expr_list]) |
| PERCENTILE_DISC(percentile) WITHIN GROUP (ORDER BY expr) OVER ([PARTITION BY expr_list]) | PERCENTILE_DISC(expr, percentile) OVER ([PARTITION BY expr_list]) |
| RANK() OVER ([PARTITION BY expr_list] [ORDER BY order_list]) | RANK() OVER ([PARTITION BY expr_list] ORDER BY order_list) |
| RATIO_TO_REPORT(ratio_expression) OVER ([PARTITION BY partition_expression]) | ratio_expression/SUM(ratio_expression) OVER ([PARTITION BY partition_expression]) |
| ROW_NUMBER() OVER ([PARTITION BY expr_list] [ORDER BY order_list]) | ROW_NUMBER() OVER ([PARTITION BY expr_list] [ORDER BY order_list]) |
| STDDEV(expression) OVER ([PARTITION BY expr_list] [ORDER BY order_list frame_clause]) | STDDEV(expression) OVER ([PARTITION BY expr_list] [ORDER BY order_list] [frame_clause]) |
| STDDEV_SAMP(expression) OVER ([PARTITION BY expr_list] [ORDER BY order_list frame_clause]) | STDDEV_SAMP(expression) OVER ([PARTITION BY expr_list] [ORDER BY order_list] [frame_clause]) |
| STDDEV_POP(expression) OVER ([PARTITION BY expr_list] [ORDER BY order_list frame_clause]) | STDDEV_POP(expression) OVER ([PARTITION BY expr_list] [ORDER BY order_list] [frame_clause]) |
| SUM(expression) OVER ([PARTITION BY expr_list] [ORDER BY order_list frame_clause]) | SUM(expression) OVER ([PARTITION BY expr_list] [ORDER BY order_list] [frame_clause]) |
| VAR_SAMP(expression) OVER ([PARTITION BY expr_list] [ORDER BY order_list frame_clause]) | VAR_SAMP(expression) OVER ([PARTITION BY expr_list] [ORDER BY order_list] [frame_clause]) |
| VAR_POP(expression) OVER ([PARTITION BY expr_list] [ORDER BY order_list frame_clause]) | VAR_POP(expression) OVER ([PARTITION BY expr_list] [ORDER BY order_list] [frame_clause]) |
| VARIANCE(expression) OVER ([PARTITION BY expr_list] [ORDER BY order_list frame_clause]) | VARIANCE(expression) OVER ([PARTITION BY expr_list] [ORDER BY order_list] [frame_clause]) |
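For example, the RATIO_TO_REPORT mapping plays out as follows (the table and columns are hypothetical):

```sql
-- Redshift
SELECT region, RATIO_TO_REPORT(sales) OVER (PARTITION BY region) AS share
FROM regional_sales;

-- BigQuery
SELECT region, sales / SUM(sales) OVER (PARTITION BY region) AS share
FROM regional_sales;
```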
 
Conditional expressions

The following table shows mappings between common Redshift conditional expressions and their BigQuery equivalents.

| Redshift | BigQuery |
|---|---|
| CASE expression WHEN value THEN result [WHEN...] [ELSE else_result] END | CASE expression WHEN value THEN result [WHEN...] [ELSE else_result] END |
| COALESCE(expression1[, ...]) | COALESCE(expression1[, ...]) |
| DECODE(expression, search1, result1 [, search2, result2...] [, default]) | CASE expression WHEN value1 THEN result1 [WHEN value2 THEN result2] [ELSE default] END |
| GREATEST(value [, ...]) | GREATEST(value [, ...]) |
| LEAST(value [, ...]) | LEAST(value [, ...]) |
| NVL(expression1[, ...]) | COALESCE(expression1[, ...]) |
| NVL2(expression, not_null_return_value, null_return_value) | IF(expression IS NULL, null_return_value, not_null_return_value) |
| NULLIF(expression1, expression2) | NULLIF(expression1, expression2) |

BigQuery also offers the following conditional expressions, which do not have a direct analog in Redshift:

● IF
● IFNULL
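A short sketch of the NVL2 translation (the table and column are hypothetical); note that the two return-value arguments swap positions:

```sql
-- Redshift
SELECT NVL2(email, 'has email', 'no email') FROM users;

-- BigQuery
SELECT IF(email IS NULL, 'no email', 'has email') FROM users;
```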
 
Date and time functions

The following table shows mappings between common Redshift date and time functions and their BigQuery equivalents. BigQuery date and time functions include date functions, datetime functions, time functions, and timestamp functions.

Keep in mind that functions that seem identical in Redshift and BigQuery might return different data types.

| Redshift | BigQuery |
|---|---|
| ADD_MONTHS(date, integer) | CAST(DATE_ADD(date, INTERVAL integer MONTH) AS TIMESTAMP) |
| timestamptz_or_timestamp AT TIME ZONE timezone | PARSE_TIMESTAMP("%c%z", FORMAT_TIMESTAMP("%c%z", timestamptz_or_timestamp, timezone))  Note: Time zones are used when parsing timestamps or formatting timestamps for display. A string-formatted timestamp might include a time zone, but when BigQuery parses the string, it stores the timestamp in the equivalent UTC time. When a time zone is not explicitly specified, the default time zone, UTC, is used. Time zone names or offsets from UTC ((-\|+)HH:MM) are supported, but time zone abbreviations (such as PDT) are not supported. |
| CONVERT_TIMEZONE([source_timezone,] target_timezone, timestamp) | PARSE_TIMESTAMP("%c%z", FORMAT_TIMESTAMP("%c%z", timestamp, target_timezone))  Note: source_timezone is UTC in BigQuery. |
| CURRENT_DATE  Note: Returns the start date for the current transaction in the current session time zone (UTC by default). | CURRENT_DATE()  Note: Returns the start date for the current statement in the UTC time zone. |
| DATE_CMP(date1, date2) | CASE WHEN date1 = date2 THEN 0 WHEN date1 > date2 THEN 1 ELSE -1 END |
| DATE_CMP_TIMESTAMP(date1, date2) | CASE WHEN date1 = CAST(date2 AS DATE) THEN 0 WHEN date1 > CAST(date2 AS DATE) THEN 1 ELSE -1 END |
| DATE_CMP_TIMESTAMPTZ(date, timestamptz) | CASE WHEN date > DATE(timestamptz) THEN 1 WHEN date < DATE(timestamptz) THEN -1 ELSE 0 END |
| DATE_PART_YEAR(date) | EXTRACT(YEAR FROM date) |
| DATEADD(date_part, interval, date) | CAST(DATE_ADD(date, INTERVAL interval date_part) AS TIMESTAMP) |
| DATEDIFF(date_part, date_expression1, date_expression2) | DATE_DIFF(date_expression1, date_expression2, date_part) |
| DATE_PART(date_part, date) | EXTRACT(date_part FROM date) |
| DATE_TRUNC('date_part', timestamp) | TIMESTAMP_TRUNC(timestamp, date_part) |
| EXTRACT(date_part FROM timestamp) | EXTRACT(date_part FROM timestamp) |
| GETDATE() | PARSE_TIMESTAMP("%c", FORMAT_TIMESTAMP("%c", CURRENT_TIMESTAMP())) |
| INTERVAL_CMP(interval_literal1, interval_literal2) | For intervals in Redshift, there are 360 days in a year. In BigQuery, you can use the parse_interval UDF shown after this table to translate a Redshift interval literal to seconds, then compare the results. |
| LAST_DAY(date) | DATE_SUB(DATE_ADD(date, INTERVAL 1 MONTH), INTERVAL 1 DAY) |
| MONTHS_BETWEEN(date1, date2) | DATE_DIFF(date1, date2, MONTH) |
| NEXT_DAY(date, day) | DATE_ADD(DATE_TRUNC(date, WEEK(day)), INTERVAL 1 WEEK) |
| SYSDATE  Note: Returns the start timestamp for the current transaction in the current session time zone (UTC by default). | CURRENT_TIMESTAMP()  Note: Returns the start timestamp for the current statement in the UTC time zone. |
| TIMEOFDAY() | FORMAT_TIMESTAMP("%a %b %d %H:%M:%E6S %E4Y %Z", CURRENT_TIMESTAMP()) |
| TIMESTAMP_CMP(timestamp1, timestamp2) | CASE WHEN timestamp1 = timestamp2 THEN 0 WHEN timestamp1 > timestamp2 THEN 1 ELSE -1 END |
| TIMESTAMP_CMP_DATE(timestamp, date) | CASE WHEN EXTRACT(DATE FROM timestamp) = date THEN 0 WHEN EXTRACT(DATE FROM timestamp) > date THEN 1 ELSE -1 END |
| TIMESTAMP_CMP_TIMESTAMPTZ(timestamp, timestamptz)  Note: Redshift compares timestamps in the user-session-defined time zone (UTC by default). | CASE WHEN timestamp = timestamptz THEN 0 WHEN timestamp > timestamptz THEN 1 ELSE -1 END  Note: BigQuery compares timestamps in the UTC time zone. |
| TIMESTAMPTZ_CMP(timestamptz1, timestamptz2)  Note: Redshift compares timestamps in the user-session-defined time zone (UTC by default). | CASE WHEN timestamptz1 = timestamptz2 THEN 0 WHEN timestamptz1 > timestamptz2 THEN 1 ELSE -1 END  Note: BigQuery compares timestamps in the UTC time zone. |
| TIMESTAMPTZ_CMP_DATE(timestamptz, date)  Note: Redshift compares timestamps in the user-session-defined time zone (UTC by default). | CASE WHEN EXTRACT(DATE FROM timestamptz) = date THEN 0 WHEN EXTRACT(DATE FROM timestamptz) > date THEN 1 ELSE -1 END  Note: BigQuery compares timestamps in the UTC time zone. |
| TIMESTAMPTZ_CMP_TIMESTAMP(timestamptz, timestamp)  Note: Redshift compares timestamps in the user-session-defined time zone (UTC by default). | CASE WHEN timestamp = timestamptz THEN 0 WHEN timestamp > timestamptz THEN 1 ELSE -1 END  Note: BigQuery compares timestamps in the UTC time zone. |
| TIMEZONE(timezone, timestamptz_or_timestamp) | PARSE_TIMESTAMP("%c%z", FORMAT_TIMESTAMP("%c%z", timestamptz_or_timestamp, timezone))  Note: See the time zone note in the AT TIME ZONE row above. |
| TO_TIMESTAMP(timestamp, format) | PARSE_TIMESTAMP(format, FORMAT_TIMESTAMP(format, timestamp))  Note: BigQuery follows a different set of format elements. See the time zone note in the AT TIME ZONE row above. |
| TRUNC(timestamp) | CAST(timestamp AS DATE) |

The parse_interval UDF referenced in the INTERVAL_CMP row:

    CREATE TEMP FUNCTION
    parse_interval(interval_literal STRING) AS (
      (SELECT SUM(CASE
          WHEN unit IN ('minutes', 'minute', 'm') THEN num * 60
          WHEN unit IN ('hours', 'hour', 'h') THEN num * 60 * 60
          WHEN unit IN ('days', 'day', 'd') THEN num * 60 * 60 * 24
          WHEN unit IN ('weeks', 'week', 'w') THEN num * 60 * 60 * 24 * 7
          WHEN unit IN ('months', 'month') THEN num * 60 * 60 * 24 * 30
          WHEN unit IN ('years', 'year') THEN num * 60 * 60 * 24 * 360
          ELSE num
        END)
       FROM (
         SELECT
           CAST(REGEXP_EXTRACT(value, r'^[0-9]*\.?[0-9]+') AS NUMERIC) num,
           SUBSTR(value, LENGTH(REGEXP_EXTRACT(value, r'^[0-9]*\.?[0-9]+')) + 1) unit
         FROM UNNEST(SPLIT(REPLACE(interval_literal, ' ', ''), ',')) value
       )));

To compare interval literals, perform:

    IF(
      parse_interval(interval_literal1) > parse_interval(interval_literal2),
      1,
      IF(
        parse_interval(interval_literal1) < parse_interval(interval_literal2),
        -1,
        0
      )
    )

BigQuery also offers the following date and time functions, which do not have a direct analog in Redshift:

● EXTRACT
● DATE
● DATE_SUB
● DATE_ADD (returning DATE data type)
● DATE_FROM_UNIX_DATE
● FORMAT_DATE
● PARSE_DATE
● UNIX_DATE
● DATETIME
● DATETIME_ADD
● DATETIME_SUB
● DATETIME_DIFF
● DATETIME_TRUNC
● FORMAT_DATETIME
● PARSE_DATETIME
● CURRENT_TIME
● TIME
● TIME_ADD
● TIME_SUB
● TIME_DIFF
● TIME_TRUNC
● FORMAT_TIME
● PARSE_TIME
● TIMESTAMP_SECONDS
● TIMESTAMP_MILLIS
● TIMESTAMP_MICROS
● UNIX_SECONDS
● UNIX_MILLIS
● UNIX_MICROS
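One translation that commonly trips up migrations is the argument order of DATEDIFF; a sketch with hypothetical columns:

```sql
-- Redshift: number of days from start_date to end_date
SELECT DATEDIFF(day, start_date, end_date) FROM projects;

-- BigQuery: the date part moves to the last argument
SELECT DATE_DIFF(end_date, start_date, DAY) FROM projects;
```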

 
Mathematical operators

The following table shows mappings between common Redshift mathematical operators and their BigQuery equivalents.

| Redshift | BigQuery |
|---|---|
| X + Y | X + Y |
| X - Y | X - Y |
| X * Y | X * Y |
| X / Y  Note: If the operator is performing integer division (in other words, if X and Y are both integers), an integer is returned. If the operator is performing non-integer division, a non-integer is returned. | If integer division: CAST(FLOOR(X / Y) AS INT64)  If not integer division: X / Y  Note: Division in BigQuery returns a non-integer. To prevent errors from a division operation (division-by-zero error), use SAFE_DIVIDE(X, Y) or IEEE_DIVIDE(X, Y). |
| X % Y | MOD(X, Y)  Note: To prevent errors from a division operation (division-by-zero error), use SAFE.MOD(X, Y). SAFE.MOD(X, 0) results in 0. |
| X ^ Y | POW(X, Y)  POWER(X, Y)  Note: Unlike in Redshift, the ^ operator in BigQuery performs bitwise XOR. |
| \|/ X | SQRT(X)  Note: To prevent errors from a square root operation (negative input), use SAFE.SQRT(X). Negative input with SAFE.SQRT(X) results in NULL. |
| \|\|/ X | SIGN(X) * POWER(ABS(X), 1/3)  Note: BigQuery's POWER(X, Y) returns an error if X is a finite value less than 0 and Y is a noninteger. |
| @ X | ABS(X) |
| X << Y | X << Y  Note: This operator returns 0 or a byte sequence of b'\x00' if the second operand Y is greater than or equal to the bit length of the first operand X (for example, 64 if X has the type INT64). This operator throws an error if Y is negative. |
| X >> Y | X >> Y  Note: Shifts the first operand X to the right. This operator does not do sign bit extension with a signed type (it fills vacant bits on the left with 0). This operator returns 0 or a byte sequence of b'\x00' if the second operand Y is greater than or equal to the bit length of the first operand X (for example, 64 if X has the type INT64). This operator throws an error if Y is negative. |
| X & Y | X & Y |
| X \| Y | X \| Y |
| ~X | ~X |

BigQuery also offers the following mathematical operator, which does not have a direct analog in Redshift:

● X ^ Y (bitwise XOR)
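For example, a Redshift X / Y expression that can hit division by zero might be hardened during migration as follows (the table and columns are hypothetical):

```sql
-- BigQuery: returns NULL instead of raising an error when units = 0
SELECT SAFE_DIVIDE(revenue, units) AS price_per_unit
FROM mydataset.sales;
```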
 
Math functions

| Redshift | BigQuery |
|---|---|
| ABS(number) | ABS(number) |
| ACOS(number) | ACOS(number) |
| ASIN(number) | ASIN(number) |
| ATAN(number) | ATAN(number) |
| ATAN2(number1, number2) | ATAN2(number1, number2) |
| CBRT(number) | POWER(number, 1/3) |
| CEIL(number) | CEIL(number) |
| CEILING(number) | CEILING(number) |
| CHECKSUM(expression) | FARM_FINGERPRINT(expression) |
| COS(number) | COS(number) |
| COT(number) | 1/TAN(number) |
| DEGREES(number) | number*180/ACOS(-1) |
| DEXP(number) | EXP(number) |
| DLOG1(number) | LN(number) |
| DLOG10(number) | LOG10(number) |
| EXP(number) | EXP(number) |
| FLOOR(number) | FLOOR(number) |
| LN(number) | LN(number) |
| LOG(number) | LOG10(number) |
| MOD(number1, number2) | MOD(number1, number2) |
| PI | ACOS(-1) |
| POWER(expression1, expression2) | POWER(expression1, expression2) |
| RADIANS(number) | ACOS(-1)*(number/180) |
| RANDOM() | RAND() |
| ROUND(number [, integer]) | ROUND(number [, integer]) |
| SIN(number) | SIN(number) |
| SIGN(number) | SIGN(number) |
| SQRT(number) | SQRT(number) |
| TAN(number) | TAN(number) |
| TO_HEX(number) | FORMAT('%x', number) |
| TRUNC(number [, integer]) | TRUNC(number [, integer]) |
 
String functions

| Redshift | BigQuery |
|---|---|
| string1 \|\| string2 | CONCAT(string1, string2) |
| BPCHARCMP(string1, string2) | CASE WHEN string1 = string2 THEN 0 WHEN string1 > string2 THEN 1 ELSE -1 END |
| BTRIM(string [, matching_string]) | TRIM(string [, matching_string]) |
| BTTEXT_PATTERN_CMP(string1, string2) | CASE WHEN string1 = string2 THEN 0 WHEN string1 > string2 THEN 1 ELSE -1 END |
| CHAR_LENGTH(expression) | CHAR_LENGTH(expression) |
| CHARACTER_LENGTH(expression) | CHARACTER_LENGTH(expression) |
| CHARINDEX(substring, string) | STRPOS(string, substring) |
| CHR(number) | CODE_POINTS_TO_STRING([number]) |
| CONCAT(string1, string2) | CONCAT(string1, string2). Note: BigQuery's CONCAT(...) supports concatenating any number of strings. |
| CRC32 | Custom user-defined function |
| FUNC_SHA1(string) | SHA1(string) |
| INITCAP | Custom user-defined function |
| LEFT(string, integer) | SUBSTR(string, 0, integer) |
| RIGHT(string, integer) | SUBSTR(string, -integer) |
| LEN(expression) | LENGTH(expression) |
| LENGTH(expression) | LENGTH(expression) |
| LOWER(string) | LOWER(string) |
| LPAD(string1, length[, string2]) | LPAD(string1, length[, string2]) |
| RPAD(string1, length[, string2]) | RPAD(string1, length[, string2]) |
| LTRIM(string, trim_chars) | LTRIM(string, trim_chars) |
| MD5(string) | MD5(string) |
| OCTET_LENGTH(expression) | BYTE_LENGTH(expression) |
| POSITION(substring IN string) | STRPOS(string, substring) |
| QUOTE_IDENT(string) | CONCAT('"', string, '"') |
| QUOTE_LITERAL(string) | CONCAT("'", string, "'") |
Redshift:

    REGEXP_COUNT(
      source_string,
      pattern
      [, position]
    )

BigQuery:

    ARRAY_LENGTH(
      REGEXP_EXTRACT_ALL(
        source_string,
        pattern
      )
    )

If position is specified:

    ARRAY_LENGTH(
      REGEXP_EXTRACT_ALL(
        SUBSTR(source_string, IF(position <= 0, 1, position)),
        pattern
      )
    )

Note: BigQuery provides regular expression support using the re2 library; see that documentation for its regular expression syntax.

Redshift:

    REGEXP_INSTR(
      source_string,
      pattern
      [, position
      [, occurrence]]
    )

BigQuery:

    IFNULL(
      STRPOS(
        source_string,
        REGEXP_EXTRACT(
          source_string,
          pattern)
      ), 0)

If position is specified:

    IFNULL(
      STRPOS(
        SUBSTR(source_string, IF(position <= 0, 1, position)),
        REGEXP_EXTRACT(
          SUBSTR(source_string, IF(position <= 0, 1, position)),
          pattern)
      ) + IF(position <= 0, 1, position) - 1, 0)

If occurrence is specified:

    IFNULL(
      STRPOS(
        SUBSTR(source_string, IF(position <= 0, 1, position)),
        REGEXP_EXTRACT_ALL(
          SUBSTR(source_string, IF(position <= 0, 1, position)),
          pattern
        )[SAFE_ORDINAL(occurrence)]
      ) + IF(position <= 0, 1, position) - 1, 0)

Note: BigQuery provides regular expression support using the re2 library; see that documentation for its regular expression syntax.
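The position handling in the REGEXP_INSTR emulation hinges on two steps: clamping a non-positive position to 1 before taking the substring, and then adding the clamped offset back to the match position found in that substring. A Python sketch of that arithmetic (`regexp_instr` is a hypothetical helper name, and Python's `re` module stands in for re2 here):

```python
import re

def regexp_instr(source, pattern, position=1):
    """Model of the BigQuery emulation: search the substring starting at
    `position` (1-based; values <= 0 are clamped to 1) and map the match
    offset back to a 1-based position in the full string. Returns 0 when
    there is no match, mirroring IFNULL(..., 0)."""
    start = 1 if position <= 0 else position        # IF(position <= 0, 1, position)
    match = re.search(pattern, source[start - 1:])  # SUBSTR(source, start)
    if match is None:
        return 0
    # match.start() is 0-based within the substring; adding the clamped
    # 1-based start yields the 1-based position in the original string.
    return match.start() + start

assert regexp_instr('abcdefg', 'd') == 4
assert regexp_instr('abcdabcd', 'd', 5) == 8
assert regexp_instr('abc', 'z') == 0
```

One caveat on the SQL form above: because it locates the extracted text with STRPOS, it finds the first literal occurrence of the matched string, which can differ from the actual regex match position if the same text also appears earlier in the input.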
Redshift:

    REGEXP_REPLACE(
      source_string,
      pattern
      [, replace_string
      [, position]]
    )

BigQuery:

    REGEXP_REPLACE(
      source_string,
      pattern,
      ''
    )

If replace_string is specified:

    REGEXP_REPLACE(
      source_string,
      pattern,
      replace_string
    )

If position is specified:

    CASE
      WHEN position > LENGTH(source_string) THEN source_string
      WHEN position <= 0 THEN
        REGEXP_REPLACE(
          source_string,
          pattern,
          replace_string
        )
      ELSE
        CONCAT(
          SUBSTR(source_string, 1, position - 1),
          REGEXP_REPLACE(
            SUBSTR(source_string, position),
            pattern,
            replace_string
          )
        )
    END

Note: BigQuery provides regular expression support using the re2 library; see that documentation for its regular expression syntax.
Redshift:

    REGEXP_SUBSTR(
      source_string,
      pattern
      [, position
      [, occurrence]]
    )

BigQuery:

    REGEXP_EXTRACT(
      source_string,
      pattern
    )

If position is specified:

    REGEXP_EXTRACT(
      SUBSTR(source_string, IF(position <= 0, 1, position)),
      pattern
    )

If occurrence is specified:

    REGEXP_EXTRACT_ALL(
      SUBSTR(source_string, IF(position <= 0, 1, position)),
      pattern
    )[SAFE_ORDINAL(occurrence)]

Note: BigQuery provides regular expression support using the re2 library; see that documentation for its regular expression syntax.
| Redshift | BigQuery |
|---|---|
| REPEAT(string, integer) | REPEAT(string, integer) |
| REPLACE(string1, old_chars, new_chars) | REPLACE(string1, old_chars, new_chars) |
| REPLICATE(string, integer) | REPEAT(string, integer) |
| REVERSE(expression) | REVERSE(expression) |
| RTRIM(string, trim_chars) | RTRIM(string, trim_chars) |
| SPLIT_PART(string, delimiter, part) | SPLIT(string, delimiter)[SAFE_ORDINAL(part)] |
| STRPOS(string, substring) | STRPOS(string, substring) |
| STRTOL(string, base) |  |
| SUBSTRING(string, start_position, number_characters) | SUBSTR(string, start_position, number_characters) |
| TEXTLEN(expression) | LENGTH(expression) |
| TRIM([BOTH] string) | TRIM(string) |
| TRIM([BOTH] characters FROM string) | TRIM(string, characters) |
| UPPER(string) | UPPER(string) |

TRANSLATE(expression, characters_to_replace, characters_to_substitute) can be implemented using a UDF:

    CREATE TEMP FUNCTION
      translate(expression STRING,
        characters_to_replace STRING,
        characters_to_substitute STRING) AS (
      IF(LENGTH(characters_to_replace) <
           LENGTH(characters_to_substitute) OR
         LENGTH(expression) <
           LENGTH(characters_to_replace),
        expression,
        (SELECT
           STRING_AGG(
             IFNULL(
               (SELECT ARRAY_CONCAT([c],
                  SPLIT(characters_to_substitute,
                  ''))[SAFE_OFFSET((
                    SELECT IFNULL(MIN(o2) + 1,
                      0) FROM
                    UNNEST(SPLIT(characters_to_replace,
                      '')) AS k WITH OFFSET o2
                    WHERE k = c))]
               ),
               ''),
             '' ORDER BY o1)
         FROM UNNEST(SPLIT(expression, ''))
           AS c WITH OFFSET o1
        ))
    );
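The TRANSLATE UDF above is dense, so its per-character behavior is worth spelling out: each character of the input that appears in characters_to_replace is mapped to the character at the same index of characters_to_substitute, and it is dropped when characters_to_substitute is too short to supply one. A Python sketch of those semantics (a model of the behavior, not a translation of the SQL):

```python
def translate(expression, characters_to_replace, characters_to_substitute):
    """Model of Redshift TRANSLATE: map characters by index, dropping any
    character whose index has no counterpart in the substitute string."""
    out = []
    for c in expression:
        i = characters_to_replace.find(c)
        if i == -1:
            out.append(c)  # not in the replace set: keep as is
        elif i < len(characters_to_substitute):
            out.append(characters_to_substitute[i])  # replace by index
        # else: no substitute at that index, so the character is dropped
    return ''.join(out)

# 'i' -> 'o', 'n' -> 'u', 't' has no counterpart and is dropped
assert translate('mint tea', 'int', 'ou') == 'mou ea'
```

Note that the SQL UDF above also guards a couple of edge cases (for example, it returns the expression unchanged when characters_to_substitute is longer than characters_to_replace); the sketch covers only the core mapping.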
 
Data type formatting functions

| Redshift | BigQuery |
|---|---|
| CAST(expression AS type) | CAST(expression AS type) |
| expression::type | CAST(expression AS type) |
| CONVERT(type, expression) | CAST(expression AS type) |
| TO_CHAR(timestamp_expression, format) | FORMAT_TIMESTAMP(format, timestamp_expression). Note: BigQuery and Redshift differ in how to specify a format string for timestamp_expression. |
| TO_CHAR(numeric_expression, format) | FORMAT(format, numeric_expression). Note: BigQuery and Redshift differ in how to specify a numeric format string. |
| TO_DATE(date_string, format) | PARSE_DATE(format, date_string). Note: BigQuery and Redshift differ in how to specify a format string for date_string. |
| TO_NUMBER(string, format) | CAST(FORMAT(format, numeric_expression) AS INT64). Note: BigQuery and Redshift differ in how to specify a numeric format string. |

BigQuery also supports SAFE_CAST(expression AS typename), which returns NULL if
BigQuery is unable to perform a cast; for example, SAFE_CAST('apple' AS INT64) returns
NULL.
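SAFE_CAST returns NULL instead of raising an error, which maps naturally onto a try/except in a host language. A minimal Python sketch of that contract (`safe_cast_int` is a hypothetical name; Python's None stands in for SQL NULL):

```python
def safe_cast_int(value):
    """Model of BigQuery SAFE_CAST(x AS INT64): return None (SQL NULL)
    instead of raising when the cast fails."""
    try:
        return int(value)
    except (TypeError, ValueError):
        return None

assert safe_cast_int('42') == 42
assert safe_cast_int('apple') is None  # SAFE_CAST('apple' AS INT64) -> NULL
```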

DML syntax 
This section addresses differences in data management language syntax between Redshift 
and BigQuery. 

INSERT statement 
Redshift offers a configurable DEFAULT keyword for columns. In BigQuery, the DEFAULT value
for nullable columns is NULL, and DEFAULT is not supported for required columns. Most
Redshift INSERT statements are compatible with BigQuery. The following table shows
exceptions.

| Redshift | BigQuery |
|---|---|
| INSERT INTO table (column1 [, ...]) DEFAULT VALUES | INSERT [INTO] table (column1 [, ...]) VALUES (DEFAULT [, ...]) |
| INSERT INTO table (column1, [,...]) VALUES (SELECT ... FROM ...) | INSERT [INTO] table (column1, [,...]) SELECT ... FROM ... |
 
BigQuery also supports inserting values using a subquery (where one of the values is 
computed using a subquery), which is not supported in Redshift. For example: 
 
INSERT INTO table (column1, column2) 
VALUES ('value_1', ( 
SELECT column2 
FROM table2 
)) 
 

COPY statement 
Redshift’s COPY command loads data into a table from data files or from an Amazon
DynamoDB table. BigQuery does not use the SQL COPY command to load data, but you can
use any of several non-SQL tools and options to load data into BigQuery tables. You can also
use data pipeline sinks provided in Apache Spark or Apache Beam to write data into BigQuery.

UPDATE statement 
Most Redshift UPDATE statements are compatible with BigQuery. The following table shows
exceptions.

Redshift:

    UPDATE table
    SET column = expression [,...]
    [FROM ...]

BigQuery:

    UPDATE table
    SET column = expression [,...]
    [FROM ...]
    WHERE TRUE

Note: All UPDATE statements in BigQuery require a WHERE keyword, followed by a condition.

Redshift:

    UPDATE table
    SET column = DEFAULT [,...]
    [FROM ...]
    [WHERE ...]

BigQuery:

    UPDATE table
    SET column = NULL [, ...]
    [FROM ...]
    WHERE ...

Note: BigQuery's UPDATE command does not support DEFAULT values. If the Redshift UPDATE
statement does not include a WHERE clause, the BigQuery UPDATE statement should be
conditioned WHERE TRUE.
 

DELETE, TRUNCATE statements 


The DELETE and TRUNCATE statements are both ways to remove rows from a table without
affecting the table schema or indexes.

In Redshift, TRUNCATE is recommended over an unqualified DELETE because it is faster and
does not require a VACUUM and ANALYZE afterward. However, you can use DELETE statements
to achieve the same effect.

In BigQuery, the DELETE statement must have a WHERE clause. For more information about
DELETE in BigQuery, see the BigQuery DELETE examples in the DML documentation.

Redshift:

    DELETE [FROM] table_name

    TRUNCATE [TABLE] table_name

BigQuery:

    DELETE FROM table_name
    WHERE TRUE

BigQuery DELETE statements require a WHERE clause.

Redshift:

    DELETE FROM table_name
    USING other_table
    WHERE table_name.id = other_table.id

BigQuery:

    DELETE FROM table_name
    WHERE table_name.id IN (
      SELECT id
      FROM other_table
    )

    DELETE FROM table_name
    WHERE EXISTS (
      SELECT id
      FROM other_table
      WHERE table_name.id = other_table.id
    )

In Redshift, USING allows additional tables to be referenced in the WHERE clause. This can
be achieved in BigQuery by using a subquery in the WHERE clause.
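The USING-to-subquery rewrite can be demonstrated on any engine that supports the subquery form. The following sketch uses Python's built-in sqlite3 module purely as a stand-in engine to show which rows the rewritten statement removes (the table and column names are illustrative):

```python
import sqlite3

# In-memory database with a target table and a lookup table.
conn = sqlite3.connect(':memory:')
conn.executescript("""
    CREATE TABLE table_name (id INTEGER);
    CREATE TABLE other_table (id INTEGER);
    INSERT INTO table_name VALUES (1), (2), (3);
    INSERT INTO other_table VALUES (2), (3);
""")

# BigQuery-style rewrite of DELETE ... USING: a subquery in the WHERE clause.
conn.execute("""
    DELETE FROM table_name
    WHERE id IN (SELECT id FROM other_table)
""")

remaining = [row[0] for row in conn.execute('SELECT id FROM table_name')]
assert remaining == [1]  # rows 2 and 3 matched the lookup table and were removed
```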
 

MERGE statement 
The MERGE statement can combine INSERT, UPDATE, and DELETE operations into a single
upsert statement and perform the operations atomically. The MERGE operation must match at
most one source row for each target row.

Redshift does not support a single MERGE command. However, a merge operation can be
performed in Redshift by performing INSERT, UPDATE, and DELETE operations in a transaction.

Merge operation by replacing existing rows

In Redshift, an overwrite of all of the columns in the target table can be performed using a
DELETE statement and then an INSERT statement. The DELETE statement removes rows that
should be updated, and then the INSERT statement inserts the updated rows. BigQuery tables
are limited to 1,000 DML statements per day, so you should consolidate INSERT, UPDATE, and
DELETE statements into a single MERGE statement as shown in the following table.
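The row-matching behavior that a MERGE with a WHEN MATCHED clause performs can be sketched in Python with dicts keyed by the merge key (this models the semantics only, not BigQuery execution; the function and column names are illustrative):

```python
def merge(target, source, filter_value):
    """Model of MERGE ... WHEN MATCHED AND source.filter = 'filter_exp'
    THEN UPDATE: for each source row whose key exists in the target and
    whose filter matches, overwrite the target row's columns."""
    for key, row in source.items():
        if key in target and row['filter'] == filter_value:
            # UPDATE SET all columns (the filter column is not a data column)
            target[key] = {k: v for k, v in row.items() if k != 'filter'}
        # keys absent from target are ignored: no WHEN NOT MATCHED clause
    return target

target = {1: {'col1': 'old'}, 2: {'col1': 'old'}}
source = {1: {'col1': 'new', 'filter': 'filter_exp'},
          2: {'col1': 'new', 'filter': 'other'},
          3: {'col1': 'new', 'filter': 'filter_exp'}}
merged = merge(target, source, 'filter_exp')
assert merged == {1: {'col1': 'new'}, 2: {'col1': 'old'}}
```

Row 2 keeps its old value because its filter does not match, and row 3 is ignored because the MERGE in the example has no WHEN NOT MATCHED branch.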

 
 
See Performing a merge operation by replacing existing rows.

Redshift:

    CREATE TEMP TABLE temp_table;

    INSERT INTO temp_table
    SELECT *
    FROM source
    WHERE source.filter = 'filter_exp';

    BEGIN TRANSACTION;

    DELETE FROM target
    USING temp_table
    WHERE target.key = temp_table.key;

    INSERT INTO target
    SELECT *
    FROM temp_table;

    END TRANSACTION;

    DROP TABLE temp_table;

BigQuery:

    MERGE target
    USING source
    ON target.key = source.key
    WHEN MATCHED AND source.filter = 'filter_exp' THEN
      UPDATE SET
        target.col1 = source.col1,
        target.col2 = source.col2,
        ...

Note: All columns must be listed if updating all columns.

See Performing a merge operation by specifying a column list.

Redshift:

    CREATE TEMP TABLE temp_table;

    INSERT INTO temp_table
    SELECT *
    FROM source
    WHERE source.filter = 'filter_exp';

    BEGIN TRANSACTION;

    UPDATE target SET
      col1 = temp_table.col1,
      col2 = temp_table.col2
    FROM temp_table
    WHERE target.key = temp_table.key;

    INSERT INTO target
    SELECT *
    FROM temp_table;

BigQuery:

    MERGE target
    USING source
    ON target.key = source.key
    WHEN MATCHED AND source.filter = 'filter_exp' THEN
      UPDATE SET
        target.col1 = source.col1,
        target.col2 = source.col2

DDL syntax

This section addresses differences in data definition language syntax between Redshift and
BigQuery.

SELECT INTO statement

In Redshift, the SELECT INTO statement can be used to insert the results of a query into a new
table, combining table creation and insertion.

Redshift:

    SELECT expression, ...
    INTO table
    FROM ...

BigQuery:

    INSERT table
    SELECT expression, ...
    FROM ...

Redshift:

    WITH subquery_table AS (
      SELECT ...
    )
    SELECT expression, ...
    INTO table
    FROM subquery_table
    ...

BigQuery:

    INSERT table
    WITH subquery_table AS (
      SELECT ...
    )
    SELECT expression, ...
    FROM subquery_table
    ...

Redshift:

    SELECT expression
    INTO TEMP table
    FROM ...

    SELECT expression
    INTO TEMPORARY table
    FROM ...

BigQuery: BigQuery offers several ways to emulate temporary tables. See the Temporary
tables section for more information.

CREATE TABLE statement

Most Redshift CREATE TABLE statements are compatible with BigQuery, except for the
following syntax elements, which are not used in BigQuery:

Redshift:

    CREATE TABLE table_name
    (
      col1 data_type1 NOT NULL,
      col2 data_type2 NULL,
      col3 data_type3 UNIQUE,
      col4 data_type4 PRIMARY KEY,
      col5 data_type5
    )

Note: UNIQUE and PRIMARY KEY constraints are informational and are not enforced by the
Redshift system.

BigQuery:

    CREATE TABLE table_name
    (
      col1 data_type1 NOT NULL,
      col2 data_type2,
      col3 data_type3,
      col4 data_type4,
      col5 data_type5
    )

Redshift:

    CREATE TABLE table_name
    (
      col1 data_type1[,...]
      table_constraints
    )

    where table_constraints are:
    [UNIQUE(column_name [, ... ])]
    [PRIMARY KEY(column_name [, ...])]
    [FOREIGN KEY(column_name [, ...])
      REFERENCES reftable [(refcolumn)]]

Note: UNIQUE and PRIMARY KEY constraints are informational and are not enforced by the
Redshift system.

BigQuery:

    CREATE TABLE table_name
    (
      col1 data_type1[,...]
    )
    PARTITION BY column_name
    CLUSTER BY column_name [, ...]

Note: BigQuery does not use UNIQUE, PRIMARY KEY, or FOREIGN KEY table constraints. To
achieve similar optimization that these constraints provide during query execution, partition
and cluster your BigQuery tables. CLUSTER BY supports up to 4 columns.

Redshift:

    CREATE TABLE table_name
    LIKE original_table_name

BigQuery: Reference this example to learn how to use the INFORMATION_SCHEMA tables to
copy column names, data types, and NOT NULL constraints to a new table.

Redshift:

    CREATE TABLE table_name
    (
      col1 data_type1
    )
    BACKUP NO

Note: In Redshift, the BACKUP NO setting is specified to save processing time and reduce
storage space.

BigQuery: The BACKUP NO table option is not used or needed because BigQuery
automatically keeps up to 7 days of historical versions of all of your tables with no effect on
processing time or billed storage.

Redshift:

    CREATE TABLE table_name
    (
      col1 data_type1
    )
    table_attributes

    where table_attributes are:
    [DISTSTYLE {AUTO|EVEN|KEY|ALL}]
    [DISTKEY (column_name)]
    [[COMPOUND|INTERLEAVED] SORTKEY
      (column_name [, ...])]

BigQuery: BigQuery supports clustering, which allows storing keys in sorted order.

Redshift:

    CREATE TABLE table_name
    AS SELECT ...

BigQuery:

    CREATE TABLE table_name
    AS SELECT ...

Redshift:

    CREATE TABLE IF NOT EXISTS table_name
    ...

BigQuery:

    CREATE TABLE IF NOT EXISTS table_name
    ...

BigQuery also supports the DDL statement CREATE OR REPLACE TABLE, which overwrites a
table if it already exists.

BigQuery's CREATE TABLE statement also supports the following clauses, which do not have a
Redshift equivalent:

● PARTITION BY partition_statement
● CLUSTER BY clustering_column_list
● OPTIONS(table_options_list)

For more information about CREATE TABLE in BigQuery, see the BigQuery CREATE TABLE
examples in the DML documentation.
 
Temporary tables

Redshift supports temporary tables, which are only visible within the current session. There
are several ways to emulate temporary tables in BigQuery:

● Dataset TTL: Create a dataset that has a short time to live (for example, one hour) so
that any tables created in the dataset are effectively temporary because they won't
persist longer than the dataset's time to live. You can prefix all of the table names in
this dataset with temp to clearly denote that the tables are temporary.

● Table TTL: Create a table that has a table-specific short time to live using DDL
statements similar to the following:

    CREATE TABLE temp.name (col1, col2, ...)
    OPTIONS(expiration_timestamp=TIMESTAMP_ADD(CURRENT_TIMESTAMP(), INTERVAL 1
    HOUR));

CREATE VIEW statement

The following table shows equivalents between Redshift and BigQuery for the CREATE VIEW
statement.

| Redshift | BigQuery |
|---|---|
| CREATE VIEW view_name AS SELECT ... | CREATE VIEW view_name AS SELECT ... |
| CREATE OR REPLACE VIEW view_name AS SELECT ... | CREATE OR REPLACE VIEW view_name AS SELECT ... |
| CREATE VIEW view_name (column_name, ...) AS SELECT ... | CREATE VIEW view_name AS SELECT ... |
| Not supported. | CREATE VIEW IF NOT EXISTS view_name OPTIONS(view_option_list) AS SELECT ... Creates a new view only if the view does not exist in the specified dataset. |
| CREATE VIEW view_name AS SELECT ... WITH NO SCHEMA BINDING. In Redshift, a late binding view is required in order to reference an external table. | In BigQuery, to create a view, all referenced objects must already exist. BigQuery allows you to query external data sources. |

User-defined functions (UDFs)

A UDF lets you create functions for custom operations. These functions accept columns of
input, perform actions, and return the result of those actions as a value.

Both Redshift and BigQuery support UDFs written as SQL expressions. Additionally, in Redshift
you can create a Python-based UDF, and in BigQuery you can create a JavaScript-based UDF.

Refer to the Google Cloud BigQuery utilities GitHub repository for a library of common
BigQuery UDFs.

CREATE FUNCTION syntax

The following table addresses differences in SQL UDF creation syntax between Redshift and
BigQuery.

Redshift:

    CREATE [OR REPLACE] FUNCTION
      function_name
      ([sql_arg_name sql_arg_data_type[,..]])
    RETURNS data_type
    IMMUTABLE
    AS $$
      sql_function_definition
    $$ LANGUAGE sql

BigQuery:

    CREATE [OR REPLACE] FUNCTION
      function_name
      ([sql_arg_name sql_arg_data_type[,..]])
    AS
      sql_function_definition

Note: In a BigQuery SQL UDF, the return data type is optional. BigQuery infers the result type
of the function from the SQL function body when a query calls the function.

Redshift:

    CREATE [OR REPLACE] FUNCTION
      function_name
      ([sql_arg_name sql_arg_data_type[,..]])
    RETURNS data_type
    { VOLATILE | STABLE | IMMUTABLE }
    AS $$
      sql_function_definition
    $$ LANGUAGE sql

BigQuery:

    CREATE [OR REPLACE] FUNCTION
      function_name
      ([sql_arg_name sql_arg_data_type[,..]])
    RETURNS data_type
    AS sql_function_definition

Note: Function volatility is not a configurable parameter in BigQuery. All BigQuery UDF
volatility is equivalent to Redshift's IMMUTABLE volatility (that is, it does not do database
lookups or otherwise use information not directly present in its argument list).

Redshift:

    CREATE [OR REPLACE] FUNCTION
      function_name
      ([sql_arg_name sql_arg_data_type[,..]])
    RETURNS data_type
    IMMUTABLE
    AS $$
      SELECT_clause
    $$ LANGUAGE sql

Note: Redshift supports only a SQL SELECT clause as the function definition. Also, the SELECT
clause cannot include any of the FROM, INTO, WHERE, GROUP BY, ORDER BY, and LIMIT
clauses.

BigQuery:

    CREATE [OR REPLACE] FUNCTION
      function_name
      ([sql_arg_name sql_arg_data_type[,..]])
    RETURNS data_type
    AS sql_expression

Note: BigQuery supports any SQL expression as the function definition. However, referencing
tables, views, or models is not supported.

Redshift:

    CREATE [OR REPLACE] FUNCTION
      function_name
      ([sql_arg_name sql_arg_data_type[,..]])
    RETURNS data_type
    IMMUTABLE
    AS $$
      sql_function_definition
    $$ LANGUAGE sql

BigQuery:

    CREATE [OR REPLACE] FUNCTION
      function_name
      ([sql_arg_name sql_arg_data_type[,..]])
    RETURNS data_type
    AS sql_function_definition

Note: A language literal need not be specified in a BigQuery SQL UDF. BigQuery interprets the
SQL expression by default. Also, the Redshift dollar quoting ($$) is not supported in BigQuery.

Redshift:

    CREATE [OR REPLACE] FUNCTION
      function_name
      (integer, integer)
    RETURNS integer
    IMMUTABLE
    AS $$
      SELECT $1 + $2
    $$ LANGUAGE sql

BigQuery:

    CREATE [OR REPLACE] FUNCTION
      function_name
      (x INT64, y INT64)
    RETURNS INT64
    AS
      SELECT x + y

Note: BigQuery UDFs require all input arguments to be named. The Redshift argument
variables ($1, $2, …) are not supported in BigQuery.

Redshift:

    CREATE [OR REPLACE] FUNCTION
      function_name
      (integer, integer)
    RETURNS integer
    IMMUTABLE
    AS $$
      SELECT $1 + $2
    $$ LANGUAGE sql

Note: Redshift does not support ANY TYPE for SQL UDFs. However, it supports using the
ANYELEMENT data type in Python-based UDFs.

BigQuery:

    CREATE [OR REPLACE] FUNCTION
      function_name
      (x ANY TYPE, y ANY TYPE)
    AS
      SELECT x + y

Note: BigQuery supports using ANY TYPE as an argument type. The function accepts an input
of any type for this argument. For more information, see templated parameters in BigQuery.

BigQuery also supports the CREATE FUNCTION IF NOT EXISTS statement, which treats the
query as successful and takes no action if a function with the same name already exists.

BigQuery's CREATE FUNCTION statement also supports creating TEMPORARY or TEMP functions,
which do not have a Redshift equivalent.

See calling UDFs for details on executing a BigQuery persistent UDF.

DROP FUNCTION syntax

The following table addresses differences in DROP FUNCTION syntax between Redshift and
BigQuery.

Redshift:

    DROP FUNCTION
      function_name
      ( [arg_name] arg_type [, ...] )
      [ CASCADE | RESTRICT ]

BigQuery:

    DROP FUNCTION
      dataset_name.function_name

Note: BigQuery does not require using the function's signature for deleting the function.
Also, removing function dependencies is not supported in BigQuery.

BigQuery also supports the DROP FUNCTION IF EXISTS statement, which deletes the function
only if the function exists in the specified dataset.

BigQuery requires that you specify the project_name if the function is not located in the
current project.
UDF components

This section highlights the similarities and differences in UDF components between Redshift
and BigQuery.

Name
● Redshift: Redshift recommends using the prefix f_ for function names to avoid conflicts
with existing or future built-in SQL function names.
● BigQuery: In BigQuery, you can use any custom function name.

Arguments
● Redshift: Arguments are optional. You can use names and data types for Python UDF
arguments and only data types for SQL UDF arguments. In a SQL UDF, you must refer to
arguments using $1, $2, and so on. Redshift also restricts the number of arguments to 32.
● BigQuery: Arguments are optional, but if you specify arguments, they must use both
names and data types for both JavaScript and SQL UDFs. The maximum number of
arguments for a persistent UDF is 256.

Data type
● Redshift: Redshift supports a different set of data types for SQL and Python UDFs. For a
Python UDF, the data type might also be ANYELEMENT. You must specify a RETURN data
type for both SQL and Python UDFs. See Data types in this document for equivalents
between data types in Redshift and in BigQuery.
● BigQuery: BigQuery supports a different set of data types for SQL and JavaScript UDFs.
For a SQL UDF, the data type might also be ANY TYPE. For more information, see templated
parameters in BigQuery. The RETURN data type is optional for SQL UDFs. See SQL type
encodings in JavaScript for information on how BigQuery data types map to JavaScript
data types.

Definition
● Redshift: For both SQL and Python UDFs, you must enclose the function definition using
dollar quoting, as in a pair of dollar signs ($$), to indicate the start and end of the function
statements. For a SQL UDF, Redshift supports only a SQL SELECT clause as the function
definition. Also, the SELECT clause cannot include any of the FROM, INTO, WHERE, GROUP
BY, ORDER BY, and LIMIT clauses. For a Python UDF, you can write a Python program using
the Python 2.7 Standard Library or import your custom modules by creating one using the
CREATE LIBRARY command.
● BigQuery: In BigQuery, you need to enclose the JavaScript code in quotes. See Quoting
rules for more information. For a SQL UDF, you can use any SQL expression as the function
definition. However, BigQuery doesn't support referencing tables, views, or models. For a
JavaScript UDF, you can include external code libraries directly using the OPTIONS section.
You can also use the BigQuery UDF test tool to test your functions.

Language
● Redshift: You must use the LANGUAGE literal to specify the language as either sql for a
SQL UDF or plpythonu for a Python UDF.
● BigQuery: You need not specify LANGUAGE for a SQL UDF but must specify the language
as js for a JavaScript UDF.

State
● Redshift: Redshift does not support creating temporary UDFs. Redshift provides an option
to define the volatility of a function using the VOLATILE, STABLE, or IMMUTABLE literals.
This is used for optimization by the query optimizer.
● BigQuery: BigQuery supports both persistent and temporary UDFs. You can reuse
persistent UDFs across multiple queries, whereas you can only use temporary UDFs in a
single query. Function volatility is not a configurable parameter in BigQuery. All BigQuery
UDF volatility is equivalent to Redshift's IMMUTABLE volatility.

Security and privileges
● Redshift: To create a UDF, you must have permission for usage on language for SQL or
plpythonu (Python). By default, USAGE ON LANGUAGE SQL is granted to PUBLIC, but you
must explicitly grant USAGE ON LANGUAGE PLPYTHONU to specific users or groups. Also,
you must be a superuser to replace a UDF.
● BigQuery: Granting explicit permissions for creating or deleting any type of UDF is not
necessary in BigQuery. Any user assigned a role of BigQuery Data Editor (having
bigquery.routines.* as one of the permissions) can create or delete functions for the
specified dataset. BigQuery also supports creating custom roles. This can be managed
using Cloud IAM.

Limits
● Redshift: See Python UDF limits.
● BigQuery: See UDF limits.
Metadata and transaction SQL statements

| Redshift | BigQuery |
|---|---|
| SELECT * FROM STL_ANALYZE WHERE name = 'T'; | Not used in BigQuery. You don't need to gather statistics in order to improve query performance. To get information about your data distribution, you can use approximate aggregate functions. |
| ANALYZE [[ table_name[(column_name [, ...])]] | Not used in BigQuery. |
| LOCK TABLE table_name; | Not used in BigQuery. |
| BEGIN TRANSACTION; SELECT ... END TRANSACTION; | BigQuery uses snapshot isolation. For details, see Consistency guarantees elsewhere in this document. |
| EXPLAIN ... | Not used in BigQuery. Similar features are the query plan explanation in the BigQuery Cloud Console, slot allocation, and audit logging in Cloud Monitoring. |
| SELECT * FROM SVV_TABLE_INFO WHERE table = 'T'; | SELECT * EXCEPT(is_typed) FROM mydataset.INFORMATION_SCHEMA.TABLES; For more information, see Introduction to BigQuery INFORMATION_SCHEMA. |
| VACUUM [table_name] | Not used in BigQuery. BigQuery clustered tables are automatically sorted. |

Multi-statement and multi-line SQL statements

Redshift supports transactions (sessions) and therefore supports statements separated by
semicolons that are consistently executed together.

BigQuery scripting enables you to send multiple statements to BigQuery in one request, to
use variables, and to use control flow statements such as IF and WHILE. In BigQuery, a script is
a SQL statement list to be executed in sequence. A SQL statement list is a list of any valid
BigQuery statements that are separated by semicolons. The scripting feature in BigQuery runs
these statements as individual queries without rollback.
Procedural SQL statements

CREATE PROCEDURE statement

| Redshift | BigQuery |
|---|---|
| CREATE or REPLACE PROCEDURE | CREATE PROCEDURE if a name is required. Otherwise, use inline with BEGIN or in a single line with CREATE TEMP FUNCTION. |
| CALL | CALL |

Variable declaration and assignment

| Redshift | BigQuery |
|---|---|
| DECLARE | DECLARE. Declares a variable of the specified type. |
| SET | SET. Sets a variable to have the value of the provided expression, or sets multiple variables at the same time based on the result of multiple expressions. |

Error condition handlers

In Redshift, an error encountered during the execution of a stored procedure ends the
execution flow, ends the transaction, and rolls back the transaction. This occurs because
subtransactions are not supported. In a Redshift stored procedure, the only supported
handler_statement is RAISE. In BigQuery, error handling is a core feature of the main control
flow, similar to what other languages provide with TRY ... CATCH blocks.

Redshift:

    BEGIN ... EXCEPTION WHEN OTHERS THEN

    RAISE

    [ <<label>> ]
    [ DECLARE
      declarations ]
    BEGIN
      statements
    EXCEPTION
      WHEN OTHERS THEN
        handler_statements
    END;

BigQuery:

    BEGIN ... EXCEPTION WHEN ERROR THEN

    RAISE

    BEGIN
      BEGIN
        ...
      EXCEPTION WHEN ERROR THEN
        SELECT 1/0;
      END;
    EXCEPTION WHEN ERROR THEN
      -- The exception thrown from the
      -- inner exception handler lands here.
    END;
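BigQuery's nested BEGIN ... EXCEPTION blocks behave like nested try/except blocks in a general-purpose language: an error raised inside an inner handler propagates to the enclosing handler. A Python sketch of that control flow (the event names are illustrative):

```python
def run():
    """Model of the nested exception handling: the inner handler itself
    raises (the SELECT 1/0 analogue), and that error lands in the outer
    handler, just as in the BigQuery example."""
    events = []
    try:
        try:
            raise RuntimeError('statement failed')  # inner block errors
        except RuntimeError:
            events.append('inner handler')
            _ = 1 / 0                               # error inside the handler
    except ZeroDivisionError:
        events.append('outer handler')              # propagated error lands here
    return events

assert run() == ['inner handler', 'outer handler']
```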
   

Cursor declarations and operations

Because BigQuery doesn't support cursors or sessions, the following statements aren't used
in BigQuery:

● DECLARE cursor_name CURSOR [FOR] ...
● PREPARE plan_name [ (datatype [, ...] ) ] AS statement
● OPEN cursor_name FOR SELECT ...
● FETCH [ NEXT | ALL | {FORWARD [ count | ALL ] } ] FROM cursor_name
● CLOSE cursor_name

If you're using the cursor to return a result set, you can achieve similar behavior using
temporary tables in BigQuery.

Dynamic SQL statements

The scripting feature in BigQuery supports dynamic SQL statements like those shown in the
following table.

| Redshift | BigQuery |
|---|---|
| EXECUTE | EXECUTE IMMEDIATE |

Flow-of-control statements

| Redshift | BigQuery |
|---|---|
| IF..THEN..ELSIF..THEN..ELSE..END IF | IF condition THEN stmts ELSE stmts END IF |
| name CURSOR [ ( arguments ) ] FOR query | Cursors or sessions are not used in BigQuery. |
| [<<label>>] LOOP statements END LOOP [ label ]; | LOOP sql_statement_list END LOOP; |
| WHILE condition LOOP stmts END LOOP | WHILE condition DO stmts END WHILE |
| EXIT | BREAK |
Consistency guarantees and transaction isolation

Both Redshift and BigQuery are atomic, that is, ACID-compliant on a per-mutation level
across many rows.

Transactions

Redshift supports serializable isolation by default for transactions. Redshift lets you specify
any of the four SQL standard transaction isolation levels but processes all isolation levels as
serializable.

BigQuery helps ensure optimistic concurrency control (first to commit wins) with snapshot
isolation, in which a query reads the last committed data before the query starts. This
approach guarantees the same level of consistency on a per-row, per-mutation basis and
across rows within the same DML statement, yet avoids deadlocks. In the case of multiple
DML updates against the same table, BigQuery switches to pessimistic concurrency control.
Load jobs can run completely independently and append to tables. However, BigQuery does
not yet provide an explicit transaction boundary or session.

Rollback

If Redshift encounters any error while running a stored procedure, it rolls back all changes
made in the transaction. Additionally, you can use the ROLLBACK transaction control statement
in a stored procedure to discard all changes. There is no concept of an explicit rollback in
BigQuery because there is no explicit transaction boundary. The workarounds are
table decorators or using FOR SYSTEM_TIME AS OF.

Database limits

Check the BigQuery public documentation for the latest quotas and limits. Many quotas for
large-volume users can be raised by contacting the Cloud support team. The following table
shows a comparison of the Redshift and BigQuery database limits.

| Limit | Redshift | BigQuery |
|---|---|---|
| Tables per database for large and xlarge cluster node types | 9,900 | Unrestricted |
| Tables per database for 8xlarge cluster node types | 20,000 | Unrestricted |
| User-defined databases you can create per cluster | 60 | Unrestricted |
| Maximum row size | 4 MB | 100 MB |
 
 