Vertica® Analytic Database 3.5, Preview 2

SQL Reference Manual
Copyright © 2006-2009 Vertica Systems, Inc.
Date of Publication: September 22, 2009

CONFIDENTIAL

Copyright © 2006-2009 Vertica Systems, Inc. and its licensors. All rights reserved.
Vertica Systems, Inc.
8 Federal Street
Billerica, MA 01821
Phone: (978) 600-1000
Fax: (978) 600-1001
E-Mail: info@vertica.com

Web site: http://www.vertica.com

The software described in this copyright notice is furnished under a license and may be used or copied only in accordance with the terms of such license. Vertica Systems, Inc. software contains proprietary information, as well as trade secrets of Vertica Systems, Inc., and is protected under international copyright law. Reproduction, adaptation, or translation, in whole or in part, by any means — graphic, electronic or mechanical, including photocopying, recording,


taping, or storage in an information retrieval system — of any part of this work covered by copyright is prohibited without prior written permission of the copyright owner, except as allowed under the copyright laws. This product or products depicted herein may be protected by one or more U.S. or international patents or pending patents.

Vertica Scripts
This documentation might reference sample Vertica scripts that demonstrate and/or enhance functionality available in the Vertica® Analytic Database. These Vertica scripts are bound to the Terms and Conditions referred to in the Vertica Script EULA Agreement (http://www.vertica.com/termsofuse).

Trademarks
Vertica™ and the Vertica® Analytic Database™ are trademarks of Vertica Systems, Inc. Adobe®, Acrobat®, and Acrobat® Reader® are registered trademarks of Adobe Systems Incorporated. AMD™ is a trademark of Advanced Micro Devices, Inc., in the United States and other countries. Fedora™ is a trademark of Red Hat, Inc. Intel® is a registered trademark of Intel Corporation. Linux® is a registered trademark of Linus Torvalds. Microsoft® is a registered trademark of Microsoft Corporation. Novell® is a registered trademark and SUSE™ is a trademark of Novell, Inc., in the United States and other countries. Oracle® is a registered trademark of Oracle Corporation. Red Hat® is a registered trademark of Red Hat, Inc. VMware® is a registered trademark or trademark of VMware, Inc., in the United States and/or other jurisdictions.

Other products mentioned may be trademarks or registered trademarks of their respective companies.

Open Source Software Acknowledgements
Vertica makes no representations or warranties regarding any third party software. All third-party software is provided or recommended by Vertica on an AS IS basis. This product includes cryptographic software written by Eric Young (eay@cryptsoft.com).

Boost
Boost Software License - Version 1.0 - August 17th, 2003
Permission is hereby granted, free of charge, to any person or organization obtaining a copy of the software and accompanying documentation covered by this license (the "Software") to use, reproduce, display, distribute, execute, and transmit the Software, and to prepare derivative works of the Software, and to permit third-parties to whom the Software is furnished to do so, all subject to the following: The copyright notices in the Software and this entire statement, including the above license grant, this restriction and the following disclaimer, must be included in all copies of the Software, in whole or in part, and all derivative works of the Software, unless such copies or


derivative works are solely in the form of machine-executable object code generated by a source language processor.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE LIABLE FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

bzip2
This file is a part of bzip2 and/or libbzip2, a program and library for lossless, block-sorting data compression. Copyright © 1996-2005 Julian R Seward. All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. The origin of this software must not be misrepresented; you must not claim that you wrote the original software. If you use this software in a product, an acknowledgment in the product documentation would be appreciated but is not required.
3. Altered source versions must be plainly marked as such, and must not be misrepresented as being the original software.
4. The name of the author may not be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Julian Seward, Cambridge, UK. jseward@bzip.org
bzip2/libbzip2 version 1.0 of 21 March 2000
This program is based on (at least) the work of: Mike Burrows, David Wheeler, Peter Fenwick, Alistair Moffat, Radford Neal, Ian H. Witten, Robert Sedgewick, Jon L. Bentley

Ganglia
Open Source License
Copyright © 2001 by Matt Massie and The Regents of the University of California. All rights reserved.
Permission to use, copy, modify, and distribute this software and its documentation for any purpose, without fee, and without written agreement is hereby granted, provided that the above copyright notice and the following two paragraphs appear in all copies of this software.
IN NO EVENT SHALL THE UNIVERSITY OF CALIFORNIA BE LIABLE TO ANY PARTY FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
THE UNIVERSITY OF CALIFORNIA SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS IS" BASIS, AND THE UNIVERSITY OF CALIFORNIA HAS NO OBLIGATION TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.

ICU (International Components for Unicode)
License - ICU 1.8.1 and later
COPYRIGHT AND PERMISSION NOTICE
Copyright (c) 1995-2009 International Business Machines Corporation and others. All rights reserved.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, provided that the above copyright notice(s) and this permission notice appear in all copies of the Software and that both the above copyright notice(s) and this permission notice appear in supporting documentation.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OF THIRD PARTY RIGHTS. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR HOLDERS INCLUDED IN THIS NOTICE BE LIABLE FOR ANY CLAIM, OR ANY SPECIAL INDIRECT OR CONSEQUENTIAL DAMAGES, OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN


CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
Except as contained in this notice, the name of a copyright holder shall not be used in advertising or otherwise to promote the sale, use or other dealings in this Software without prior written authorization of the copyright holder. All trademarks and registered trademarks mentioned herein are the property of their respective owners.

Lighttpd
Open Source License
Copyright © 2004, Jan Kneschke, incremental. All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
3. Neither the name of the 'incremental' nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

MIT Kerberos
Copyright © 1985-2007 by the Massachusetts Institute of Technology.
Export of software employing encryption from the United States of America may require a specific license from the United States Government. It is the responsibility of any person or organization contemplating export to obtain such a license before exporting.
WITHIN THAT CONSTRAINT, permission to use, copy, modify, and distribute this software and its documentation for any purpose and without fee is hereby granted, provided that the above copyright notice appear in all copies and that both that copyright notice and this permission notice appear in supporting documentation, and that the name of M.I.T. not be used in advertising or publicity pertaining to distribution of the software without specific, written prior permission. Furthermore if you modify this software you must label your software as modified software and not distribute it in such a fashion that it might be confused with the original MIT


software. M.I.T. makes no representations about the suitability of this software for any purpose. It is provided “as is” without express or implied warranty.
Individual source code files are copyright MIT, Cygnus Support, Novell, OpenVision Technologies, Oracle, Red Hat, Sun Microsystems, FundsXpress, and others. Project Athena, Athena, Athena MUSE, Discuss, Hesiod, Kerberos, Moira, and Zephyr are trademarks of the Massachusetts Institute of Technology (MIT). No commercial use of these trademarks may be made without prior written permission of MIT. “Commercial use” means use of a name in a product or other for-profit manner. It does NOT prevent a commercial firm from referring to the MIT trademarks in order to convey information (although in doing so, recognition of their trademark status should be given).
Portions of src/lib/crypto have the following copyright:
Copyright © 1998 by the FundsXpress, INC. All rights reserved.
Export of this software from the United States of America may require a specific license from the United States Government. It is the responsibility of any person or organization contemplating export to obtain such a license before exporting.
WITHIN THAT CONSTRAINT, permission to use, copy, modify, and distribute this software and its documentation for any purpose and without fee is hereby granted, provided that the above copyright notice appear in all copies and that both that copyright notice and this permission notice appear in supporting documentation, and that the name of FundsXpress not be used in advertising or publicity pertaining to distribution of the software without specific, written prior permission. FundsXpress makes no representations about the suitability of this software for any purpose. It is provided “as is” without express or implied warranty.
THIS SOFTWARE IS PROVIDED “AS IS” AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.
The implementation of the AES encryption algorithm in src/lib/crypto/aes has the following copyright:
Copyright © 2001, Dr Brian Gladman <brg@gladman.uk.net>, Worcester, UK. All rights reserved.
LICENSE TERMS
The free distribution and use of this software in both source and binary form is allowed (with or without changes) provided that:
1. Distributions of this source code include the above copyright notice, this list of conditions and the following disclaimer.
2. Distributions in binary form include the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other associated materials.
3. The copyright holder's name is not used to endorse products built using this software without specific written permission.


DISCLAIMER
This software is provided 'as is' with no explicit or implied warranties in respect of any properties, including, but not limited to, correctness and fitness for purpose.
The implementations of GSSAPI mechglue in GSSAPI-SPNEGO in src/lib/gssapi, including the following files:
lib/gssapi/generic/gssapi_err_generic.et
lib/gssapi/mechglue/g_accept_sec_context.c
lib/gssapi/mechglue/g_acquire_cred.c
lib/gssapi/mechglue/g_canon_name.c
lib/gssapi/mechglue/g_compare_name.c
lib/gssapi/mechglue/g_context_time.c
lib/gssapi/mechglue/g_delete_sec_context.c
lib/gssapi/mechglue/g_dsp_name.c
lib/gssapi/mechglue/g_dsp_status.c
lib/gssapi/mechglue/g_dup_name.c
lib/gssapi/mechglue/g_exp_sec_context.c
lib/gssapi/mechglue/g_export_name.c
lib/gssapi/mechglue/g_glue.c
lib/gssapi/mechglue/g_imp_name.c
lib/gssapi/mechglue/g_imp_sec_context.c
lib/gssapi/mechglue/g_init_sec_context.c
lib/gssapi/mechglue/g_initialize.c
lib/gssapi/mechglue/g_inquire_context.c
lib/gssapi/mechglue/g_inquire_cred.c
lib/gssapi/mechglue/g_inquire_names.c
lib/gssapi/mechglue/g_process_context.c
lib/gssapi/mechglue/g_rel_buffer.c
lib/gssapi/mechglue/g_rel_cred.c
lib/gssapi/mechglue/g_rel_name.c
lib/gssapi/mechglue/g_rel_oid_set.c
lib/gssapi/mechglue/g_seal.c
lib/gssapi/mechglue/g_sign.c
lib/gssapi/mechglue/g_store_cred.c
lib/gssapi/mechglue/g_unseal.c
lib/gssapi/mechglue/g_userok.c
lib/gssapi/mechglue/g_utils.c
lib/gssapi/mechglue/g_verify.c
lib/gssapi/mechglue/gssd_pname_to_uid.c
lib/gssapi/mechglue/mglueP.h
lib/gssapi/mechglue/oid_ops.c
lib/gssapi/spnego/gssapiP_spnego.h
lib/gssapi/spnego/spnego_mech.c

are subject to the following license:


Copyright © 2004 Sun Microsystems, Inc.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Npgsql - .Net Data Provider for Postgresql
Copyright © 2002-2008, The Npgsql Development Team
Permission to use, copy, modify, and distribute this software and its documentation for any purpose, without fee, and without a written agreement is hereby granted, provided that the above copyright notice and this paragraph and the following two paragraphs appear in all copies.
IN NO EVENT SHALL THE NPGSQL DEVELOPMENT TEAM BE LIABLE TO ANY PARTY FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS DOCUMENTATION, EVEN IF THE NPGSQL DEVELOPMENT TEAM HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. THE NPGSQL DEVELOPMENT TEAM SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS IS" BASIS, AND THE NPGSQL DEVELOPMENT TEAM HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.

OpenLDAP
The OpenLDAP Public License
Version 2.8, 17 August 2003
Redistribution and use of this software and associated documentation ("Software"), with or without modification, are permitted provided that the following conditions are met:
1. Redistributions in source form must retain copyright statements and notices,
2. Redistributions in binary form must reproduce applicable copyright statements and notices, this list of conditions, and the following disclaimer in the documentation and/or other materials provided with the distribution, and


3. Redistributions must contain a verbatim copy of this document.
The OpenLDAP Foundation may revise this license from time to time. Each revision is distinguished by a version number. You may use this Software under terms of this license revision or under the terms of any subsequent revision of the license.
THIS SOFTWARE IS PROVIDED BY THE OPENLDAP FOUNDATION AND ITS CONTRIBUTORS ``AS IS'' AND ANY EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE OPENLDAP FOUNDATION, ITS CONTRIBUTORS, OR THE AUTHOR(S) OR OWNER(S) OF THE SOFTWARE BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
The names of the authors and copyright holders must not be used in advertising or otherwise to promote the sale, use or other dealing in this Software without specific, written prior permission. Title to copyright in this Software shall at all times remain with copyright holders.
OpenLDAP is a registered trademark of the OpenLDAP Foundation.
Copyright 1999-2003 The OpenLDAP Foundation, Redwood City, California, USA. All Rights Reserved. Permission to copy and distribute verbatim copies of this document is granted.

Open SSL
OpenSSL License
Copyright © 1998-2008 The OpenSSL Project. All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
3. All advertising materials mentioning features or use of this software must display the following acknowledgment: "This product includes software developed by the OpenSSL Project for use in the OpenSSL Toolkit. (http://www.openssl.org/)"
4. The names "OpenSSL Toolkit" and "OpenSSL Project" must not be used to endorse or promote products derived from this software without prior written permission. For written permission, please contact openssl-core@openssl.org.
5. Products derived from this software may not be called "OpenSSL" nor may "OpenSSL" appear in their names without prior written permission of the OpenSSL Project.
6. Redistributions of any form whatsoever must retain the following acknowledgment: "This product includes software developed by the OpenSSL Project for use in the OpenSSL Toolkit (http://www.openssl.org/)"

THIS SOFTWARE IS PROVIDED BY THE OpenSSL PROJECT ``AS IS'' AND ANY EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE OpenSSL PROJECT OR ITS CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Perl Artistic License
The "Artistic License"
Preamble
The intent of this document is to state the conditions under which a Package may be copied, such that the Copyright Holder maintains some semblance of artistic control over the development of the package, while giving the users of the package the right to use and distribute the Package in a more-or-less customary fashion, plus the right to make reasonable modifications.
Definitions:
"Package" refers to the collection of files distributed by the Copyright Holder, and derivatives of that collection of files created through textual modification.
"Standard Version" refers to such a Package if it has not been modified, or has been modified in accordance with the wishes of the Copyright Holder as specified below.
"Copyright Holder" is whoever is named in the copyright or copyrights for the package.
"You" is you, if you're thinking about copying or distributing this Package.
"Reasonable copying fee" is whatever you can justify on the basis of media cost, duplication charges, time of people involved, and so on. (You will not be required to justify it to the Copyright Holder, but only to the computing community at large as a market that must bear the fee.)
"Freely Available" means that no fee is charged for the item itself, though there may be fees involved in handling the item. It also means that recipients of the item may redistribute it under the same conditions they received it.
1. You may make and give away verbatim copies of the source form of the Standard Version of this Package without restriction, provided that you duplicate all of the original copyright notices and associated disclaimers.
2. You may apply bug fixes, portability fixes and other modifications derived from the Public Domain or from the Copyright Holder. A Package modified in such a way shall still be considered the Standard Version.

3. You may otherwise modify your copy of this Package in any way, provided that you insert a prominent notice in each changed file stating how and when you changed that file, and provided that you do at least ONE of the following:
a) place your modifications in the Public Domain or otherwise make them Freely Available, such as by posting said modifications to Usenet or an equivalent medium, or placing the modifications on a major archive site such as uunet.uu.net, or by allowing the Copyright Holder to include your modifications in the Standard Version of the Package.
b) use the modified Package only within your corporation or organization.
c) rename any non-standard executables so the names do not conflict with standard executables, which must also be provided, and provide a separate manual page for each non-standard executable that clearly documents how it differs from the Standard Version.
d) make other distribution arrangements with the Copyright Holder.
4. You may distribute the programs of this Package in object code or executable form, provided that you do at least ONE of the following:
a) distribute a Standard Version of the executables and library files, together with instructions (in the manual page or equivalent) on where to get the Standard Version.
b) accompany the distribution with the machine-readable source of the Package with your modifications.
c) give non-standard executables non-standard names, and clearly document the differences in manual pages (or equivalent), together with instructions on where to get the Standard Version.
d) make other distribution arrangements with the Copyright Holder.
5. You may charge a reasonable copying fee for any distribution of this Package. You may charge any fee you choose for support of this Package. You may not charge a fee for this Package itself. However, you may distribute this Package in aggregate with other (possibly commercial) programs as part of a larger (possibly commercial) software distribution provided that you do not advertise this Package as a product of your own. You may embed this Package's interpreter within an executable of yours (by linking); this shall be construed as a mere form of aggregation, provided that the complete Standard Version of the interpreter is so embedded.
6. The scripts and library files supplied as input to or produced as output from the programs of this Package do not automatically fall under the copyright of this Package, but belong to whoever generated them, and may be sold commercially, and may be aggregated with this Package. If such scripts or library files are aggregated with this Package via the so-called "undump" or "unexec" methods of producing a binary executable image, then distribution of such an image shall neither be construed as a distribution of this Package nor shall it fall under the restrictions of Paragraphs 3 and 4, provided that you do not represent such an executable image as a Standard Version of this Package.
7. C subroutines (or comparably compiled subroutines in other languages) supplied by you and linked into this Package in order to emulate subroutines and variables of the language defined by this Package shall not be considered part of this Package, but are the equivalent of input as in Paragraph 6, provided these subroutines do not change the language in any way that would cause it to fail the regression tests for the language.
8. Aggregation of this Package with a commercial distribution is always permitted provided that the use of this Package is embedded; that is, when no overt attempt is made to make this Package's interfaces visible to the end user of the commercial distribution. Such use shall not be construed as a distribution of this Package.
9. The name of the Copyright Holder may not be used to endorse or promote products derived from this software without specific prior written permission.
10. THIS PACKAGE IS PROVIDED "AS IS" AND WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR A PARTICULAR PURPOSE.

PHP License
The PHP License, version 3.01
Copyright © 1999-2009 The PHP Group. All rights reserved.
Redistribution and use in source and binary forms, with or without modification, is permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
3. The name "PHP" must not be used to endorse or promote products derived from this software without prior written permission. For written permission, please contact group@php.net.
4. Products derived from this software may not be called "PHP", nor may "PHP" appear in their name, without prior written permission from group@php.net. You may indicate that your software works in conjunction with PHP by saying "Foo for PHP" instead of calling it "PHP Foo" or "phpfoo".
5. The PHP Group may publish revised and/or new versions of the license from time to time. Each version will be given a distinguishing version number. Once covered code has been published under a particular version of the license, you may always continue to use it under the terms of that version. You may also choose to use such covered code under the terms of any subsequent version of the license published by the PHP Group. No one other than the PHP Group has the right to modify the terms applicable to covered code created under this License.
6. Redistributions of any form whatsoever must retain the following acknowledgment: "This product includes PHP software, freely available from <http://www.php.net/software/>".

THIS SOFTWARE IS PROVIDED BY THE PHP DEVELOPMENT TEAM ``AS IS'' AND ANY EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE PHP DEVELOPMENT TEAM OR ITS CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

This software consists of voluntary contributions made by many individuals on behalf of the PHP Group. The PHP Group can be contacted via Email at group@php.net. For more information on the PHP Group and the PHP project, please see <http://www.php.net>. PHP includes the Zend Engine, freely available at <http://www.zend.com>.

PostgreSQL
This product uses the PostgreSQL Database Management System (formerly known as Postgres, then as Postgres95).
Portions Copyright © 1996-2005, The PostgreSQL Global Development Group
Portions Copyright © 1994, The Regents of the University of California
Permission to use, copy, modify, and distribute this software and its documentation for any purpose, without fee, and without a written agreement is hereby granted, provided that the above copyright notice and this paragraph and the following two paragraphs appear in all copies.
IN NO EVENT SHALL THE UNIVERSITY OF CALIFORNIA BE LIABLE TO ANY PARTY FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. THE UNIVERSITY OF CALIFORNIA SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS IS" BASIS, AND THE UNIVERSITY OF CALIFORNIA HAS NO OBLIGATIONS TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.

Python Dialog
The Administration Tools part of this product uses Python Dialog, a Python module for doing console-mode user interaction.
Upstream Authors: Peter Astrand <peter@cendio.se>, Robb Shecter <robb@acm.org>, Sultanbek Tezadov <http://sultan.da.ru>, Florent Rougon <flo@via.ecp.fr>
Copyright © 2000 Robb Shecter, Sultanbek Tezadov; Copyright © 2002, 2003, 2004 Florent Rougon
License: This package is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This package is distributed in the hope that it is useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public License for more details. You should have received a copy of the GNU Lesser General Public License along with this package; if not, write to the Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA.
The complete source code of the Python dialog package and complete text of the GNU Lesser General Public License can be found on the Vertica Systems Web site at http://www.vertica.com/licenses/pythondialog-2.7.tar.bz2

RRDTool Open Source License
Note: rrdtool is a dependency of using the ganglia-web third-party tool. RRDTool allows the graphs displayed by ganglia-web to be produced.
RRDTOOL - Round Robin Database Tool: a tool for fast logging of numerical data and graphical display of this data.
Copyright © 1998-2008 Tobias Oetiker. All rights reserved.
GNU GPL License: This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
FLOSS License Exception (Adapted from http://www.mysql.com/company/legal/licensing/foss-exception.html)
I want specified Free/Libre and Open Source Software ("FLOSS") applications to be able to use specified GPL-licensed RRDtool libraries (the "Program") despite the fact that not all FLOSS licenses are compatible with version 2 of the GNU General Public License (the "GPL"). As a special exception to the terms and conditions of version 2.0 of the GPL: You are free to distribute a Derivative Work that is formed entirely from the Program and one or more works (each, a "FLOSS Work") licensed under one or more of the licenses listed below.

0 "2003" ("1998") 3.0/1.0 Version(s)/Copyright Date 1.0 "July 22 1999" 1.0/1. modified. If the above conditions are not met.0 2. can reasonably be considered independent and separate works in themselves which are not derivatives of either the Program.1 2.0 1.0 1.1 Spread -xv- .0 From Perl 5.1 1. 3 Any works which are aggregated with the Program or with a Derivative Work on a volume of a storage or distribution medium in accordance with the GPL.8. except for identifiable sections of the Derivative Work which are not derived from the Program.0 2.1.Contents as long as: 1 You obey the GPL in all respects for the Program and the Derivative Work. and which can reasonably be considered independent and separate works in themselves § are distributed subject to one of the FLOSS licenses listed below. and § the object code or executable form of those sections are accompanied by the complete corresponding machine-readable source code for those sections on the same medium and under the same FLOSS license as the corresponding object code or executable forms of those sections.txt) Mozilla Public License (MPL) Open Software License OpenSSL license (with original SSLeay license) PHP License Python license (CNRI Python License) Python Software Foundation License Sleepycat License W3C License X11 License Zlib/libpng License Zope Public License 2. distributed or used under the terms and conditions of the GPL. Version Jabber Open Source License MIT License (As listed in file MIT-License.0/2.1 "1999" "2001" "2001" 2.1/2. then the Program may only be copied.0/2.0 2. a Derivative Work or a FLOSS Work. FLOSS License List License name Academic Free License Apache Software License Apple Public Source License Artistic license BSD license Common Public License GNU Library or "Lesser" General Public License (LGPL) IBM Public License. 
and which can reasonably be considered independent and separate works in themselves 2 All identifiable sections of the Derivative Work which are not derived from the Program.

This product uses software developed by Spread Concepts LLC for use in the Spread toolkit. For more information about Spread, see http://www.spread.org (http://www.spread.org).
Copyright (c) 1993-2006 Spread Concepts LLC. All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer and request.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer and request in the documentation and/or other materials provided with the distribution.
3. All advertising materials (including web pages) mentioning features or use of this software, or software that uses this software, must display the following acknowledgment: "This product uses software developed by Spread Concepts LLC for use in the Spread toolkit. For more information about Spread see http://www.spread.org"
4. The names "Spread" or "Spread toolkit" must not be used to endorse or promote products derived from this software without prior written permission.
5. Redistributions of any form whatsoever must retain the following acknowledgment: "This product uses software developed by Spread Concepts LLC for use in the Spread toolkit. For more information about Spread, see http://www.spread.org"
This license shall be governed by and construed and enforced in accordance with the laws of the State of Maryland, without reference to its conflicts of law provisions. The exclusive jurisdiction and venue for all legal actions relating to this license shall be in courts of competent subject matter jurisdiction located in the State of Maryland.
TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, SPREAD IS PROVIDED UNDER THIS LICENSE ON AN AS IS BASIS, WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, WITHOUT LIMITATION, WARRANTIES THAT SPREAD IS FREE OF DEFECTS, MERCHANTABLE, FIT FOR A PARTICULAR PURPOSE OR NON-INFRINGING. ALL WARRANTIES ARE DISCLAIMED AND THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE CODE IS WITH YOU. SHOULD ANY CODE PROVE DEFECTIVE IN ANY RESPECT, YOU (NOT THE COPYRIGHT HOLDER OR ANY OTHER CONTRIBUTOR) ASSUME THE COST OF ANY NECESSARY SERVICING, REPAIR OR CORRECTION. THIS DISCLAIMER OF WARRANTY CONSTITUTES AN ESSENTIAL PART OF THIS LICENSE. NO USE OF ANY CODE IS AUTHORIZED HEREUNDER EXCEPT UNDER THIS DISCLAIMER.
TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, IN NO EVENT SHALL THE COPYRIGHT HOLDER OR ANY OTHER CONTRIBUTOR BE LIABLE FOR ANY SPECIAL, INCIDENTAL, INDIRECT, OR CONSEQUENTIAL DAMAGES FOR LOSS OF PROFITS, REVENUE, OR FOR LOSS OF INFORMATION OR ANY OTHER LOSS. YOU EXPRESSLY AGREE TO FOREVER INDEMNIFY, DEFEND AND HOLD HARMLESS THE COPYRIGHT HOLDERS AND CONTRIBUTORS OF SPREAD AGAINST ALL CLAIMS, DEMANDS, SUITS OR OTHER ACTIONS ARISING DIRECTLY OR INDIRECTLY FROM YOUR ACCEPTANCE AND USE OF SPREAD.
Although NOT REQUIRED, we at Spread Concepts would appreciate it if active users of Spread put a link on their web site to Spread's web site when possible. We also encourage users to let us know who they are, how they are using Spread, and any comments they have through either e-mail (spread@spread.org) or our web site at (http://www.spread.org/comments).

SNMP

Various copyrights apply to this package, listed in various separate parts below. Please make sure that you read all the parts.
Up until 2001, the project was based at UC Davis, and the first part covers all code written during this time. From 2001 onwards, the project has been based at SourceForge, and Networks Associates Technology, Inc hold the copyright on behalf of the wider Net-SNMP community, covering all derivative work done since then. An additional copyright section has been added as Part 3 below also under a BSD license for the work contributed by Cambridge Broadband Ltd. to the project since 2001. An additional copyright section has been added as Part 4 below also under a BSD license for the work contributed by Sun Microsystems, Inc. to the project since 2003. Code has been contributed to this project by many people over the years it has been in development, and a full list of contributors can be found in the README file under the THANKS section.
Part 1: CMU/UCD copyright notice: (BSD like)
Copyright © 1989, 1991, 1992 by Carnegie Mellon University
Derivative Work - 1996, 1998-2000
Copyright © 1996, 1998-2000 The Regents of the University of California
All Rights Reserved
Permission to use, copy, modify and distribute this software and its documentation for any purpose and without fee is hereby granted, provided that the above copyright notice appears in all copies and that both that copyright notice and this permission notice appear in supporting documentation, and that the name of CMU and The Regents of the University of California not be used in advertising or publicity pertaining to distribution of the software without specific written permission.
CMU AND THE REGENTS OF THE UNIVERSITY OF CALIFORNIA DISCLAIM ALL WARRANTIES WITH REGARD TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL CMU OR THE REGENTS OF THE UNIVERSITY OF CALIFORNIA BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM THE LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
Part 2: Networks Associates Technology, Inc copyright notice (BSD)
Copyright © 2001-2003, Networks Associates Technology, Inc
All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
• Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
• Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
• Neither the name of the Networks Associates Technology, Inc nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Part 3: Cambridge Broadband Ltd. copyright notice (BSD)
Portions of this code are copyright (c) 2001-2003, Cambridge Broadband Ltd.
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
• Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
• Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
• The name of Cambridge Broadband Ltd. may not be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDER ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Part 4: Sun Microsystems, Inc. copyright notice (BSD)
Copyright © 2003 Sun Microsystems, Inc., 4150 Network Circle, Santa Clara, California 95054, U.S.A. All rights reserved.
Use is subject to license terms below.
This distribution may include materials developed by third parties.
Sun, Sun Microsystems, the Sun logo and Solaris are trademarks or registered trademarks of Sun Microsystems, Inc. in the U.S. and other countries.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
• Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
• Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
• Neither the name of the Sun Microsystems, Inc. nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Part 5: Sparta, Inc copyright notice (BSD)
Copyright © 2003-2006, Sparta, Inc
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
• Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
• Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
• Neither the name of Sparta, Inc nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Part 6: Cisco/BUPTNIC copyright notice (BSD)
Copyright © 2004, Cisco, Inc and Information Network Center of Beijing University of Posts and Telecommunications.
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
• Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
• Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
• Neither the name of Cisco, Inc, Beijing University of Posts and Telecommunications, nor the names of their contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Part 7: Fabasoft R&D Software GmbH & Co KG copyright notice (BSD)
Copyright © Fabasoft R&D Software GmbH & Co KG, 2003
oss@fabasoft.com
Author: Bernhard Penz
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
• Redistributions of source code must retain the above copyright notice, this list of conditions

and the following disclaimer.
• Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
• The name of Fabasoft R&D Software GmbH & Co KG or any of its subsidiaries, brand or product names may not be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDER ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Tecla Command-line Editing

Copyright (c) 2000 by Martin C. Shepherd. All rights reserved.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, provided that the above copyright notice(s) and this permission notice appear in all copies of the Software and that both the above copyright notice(s) and this permission notice appear in supporting documentation.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OF THIRD PARTY RIGHTS. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR HOLDERS INCLUDED IN THIS NOTICE BE LIABLE FOR ANY CLAIM, OR ANY SPECIAL INDIRECT OR CONSEQUENTIAL DAMAGES, OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
Except as contained in this notice, the name of a copyright holder shall not be used in advertising or otherwise to promote the sale, use or other dealings in this Software without prior written authorization of the copyright holder.

Webmin Open Source License

Copyright © Jamie Cameron

All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
3. Neither the name of the developer nor the names of contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE DEVELOPER ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE DEVELOPER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

zlib

This is used by the project to load zipped files directly by COPY command.
zlib.h -- interface of the 'zlib' general purpose compression library version 1.2.3, July 18th, 2005
Copyright © 1995-2005 Jean-loup Gailly and Mark Adler
www.zlib.net
This software is provided 'as-is', without any express or implied warranty. In no event will the authors be held liable for any damages arising from the use of this software.
Permission is granted to anyone to use this software for any purpose, including commercial applications, and to alter it and redistribute it freely, subject to the following restrictions:
1. The origin of this software must not be misrepresented; you must not claim that you wrote the original software. If you use this software in a product, an acknowledgment in the product documentation would be appreciated but is not required.
2. Altered source versions must be plainly marked as such, and must not be misrepresented as being the original software.
3. This notice may not be removed or altered from any source distribution.
Jean-loup Gailly jloup@gzip.org
Mark Adler madler@alumni.caltech.edu

Contents

Technical Support .......................................................................... 33

About the Documentation ................................................................ 35
Where to Find the Vertica Documentation ........................................ 35
Reading the Online Documentation .................................................. 35
Printing Full Books ......................................................................... 35
Where to Find Additional Information ............................................... 36
Suggested Reading Paths ................................................................ 36
Typographical Conventions .............................................................. 39

Preface ........................................................................................ 43

SQL Overview ............................................................................... 45

System Limits ............................................................................... 49

SQL Language Elements ................................................................. 51
Keywords and Reserved Words ........................................................ 51
Keywords ...................................................................................... 51
Reserved Words ............................................................................ 51
Identifiers ..................................................................................... 54
Constants ..................................................................................... 55
Numeric Constants ........................................................................ 55
String Constants (Standard) ............................................................ 56
String Constants (Dollar-Quoted) ..................................................... 57
Date/Time Constants ...................................................................... 57
Operators ..................................................................................... 58
Binary Operators ........................................................................... 62
Boolean Operators ......................................................................... 62
Comparison Operators .................................................................... 65
Data Type Coercion Operators (CAST) .............................................. 65
Date/Time Operators ...................................................................... 66
Mathematical Operators .................................................................. 67
NULL Operators ............................................................................. 68
String Concatenation Operators ....................................................... 69
Expressions ................................................................................... 70
Aggregate Expressions ................................................................... 70
CASE Expressions .......................................................................... 72
Column References ........................................................................ 73
Comments ..................................................................................... 74
Date/Time Expressions ................................................................... 75
NULL Value ................................................................................... 76
Numeric Expressions ...................................................................... 78
Predicates ..................................................................................... 78
BETWEEN-predicate ....................................................................... 79
Boolean-predicate .......................................................................... 79
column-value-predicate .................................................................. 80
IN-predicate .................................................................................. 81
join-predicate ................................................................................ 82
LIKE-predicate ............................................................................... 83
NULL-predicate .............................................................................. 84
Search Conditions .......................................................................... 86

SQL Data Types ............................................................................. 89
Binary Data Types .......................................................................... 89
Boolean Data Type ......................................................................... 93
Character Data Types ..................................................................... 94
Date/Time Data Types .................................................................... 96
DATE ............................................................................................ 96
TIME ............................................................................................ 97
TIMESTAMP .................................................................................. 99
INTERVAL ..................................................................................... 102
Numeric Data Types ....................................................................... 103
DOUBLE PRECISION (FLOAT) .......................................................... 105
INTEGER ....................................................................................... 107
NUMERIC ...................................................................................... 107

SQL Functions ............................................................................... 111
Aggregate Functions ...................................................................... 112
AVG ............................................................................................. 112
BIT_AND ....................................................................................... 113
BIT_OR ......................................................................................... 114
BIT_XOR ....................................................................................... 115
COUNT ......................................................................................... 116
COUNT(*) ..................................................................................... 119
MAX ............................................................................................. 119
MIN .............................................................................................. 120
STDDEV ........................................................................................ 120
STDDEV_POP ................................................................................. 121
STDDEV_SAMP .............................................................................. 121
SUM ............................................................................................. 122
SUM_FLOAT .................................................................................. 122
VAR_POP ...................................................................................... 123
VAR_SAMP .................................................................................... 123
VARIANCE ..................................................................................... 124
ROW_NUMBER ............................................................................... 135
ADD_MONTHS ............................................................................... 139
GETUTCDATE ................................................................................. 153
LOCALTIMESTAMP ......................................................................... 156
NOW ............................................................................................ 157
STATEMENT_TIMESTAMP ................................................................ 159
TRANSACTION_TIMESTAMP ............................................................ 165
Mathematical Functions .................................................................. 174
ABS .............................................................................................. 176
COS .............................................................................................. 178
181 LN ................................................................................................... 143 CURRENT_DATE ...........Contents Analytic Functions ......................... 155 LOCALTIME ............................................................................................................................................................ 179 EXP ....................................................................................................... 140 AGE.......................................................... 160 SYSDATE ............................................................ 156 MONTHS_BETWEEN ........................................................................................................................... 146 DATEDIFF ........................... 161 TIMEOFDAY ............................................................................................... 169 TO_NUMBER ......................................................................................................................................................... 128 LEAD / LAG............................................................................................................................................ 143 CURRENT_TIME........................................................................................................................................................................................................................................................................................................................ 125 FIRST_VALUE / LAST_VALUE ....................................... 171 Template Patterns for Numeric Formatting.......................................................................................................................................................................................................... 153 GETDATE .................................. 166 TO_CHAR ...... 
180 FLOOR ............................................................................. 154 ISFINITE ........................................................................................................................................... 181 LOG............................................................................................................................................................................. 180 HASH ....................................................... 182 -xxv- .................................................................................................... 170 Template Patterns for Date/Time Formatting........................................ 179 DEGREES ................................................................................................................................................................................................................................. 168 TO_TIMESTAMP..................................................................................................... 154 LAST_DAY ................................................................................................................................................................... 178 CEILING (CEIL) ........................................................................................................................................................................................................................ 131 RANK / DENSE_RANK ............................ 176 ACOS ... 147 EXTRACT ................................................................................................................................................ 166 TO_DATE .................................................................................................................................................................................................................................... 
144 CURRENT_TIMESTAMP ..................................................................... 178 COT ...................................... 176 ASIN........................................................................................................................................... 177 ATAN ........................................................................ 145 DATE_TRUNC................................................... 159 OVERLAPS ......................................................................................................................................................................................... 177 CBRT ................. 165 Formatting Functions ........................ 177 ATAN2 .............................................................................................................. 144 DATE_PART ............................................................................................................... 137 Date/Time Functions .. 161 TIME_SLICE ...................................................................................................................................................................................................................................................................................................................................................................................................................... 142 CLOCK_TIMESTAMP ...........................................................................

............ 191 ISNULL ........................................ 188 WIDTH_BUCKET ........................................................................................................................................................................................................................... 220 SPLIT_PART ......................................................................................................................................................................................................................................................................................................................................... 207 INSTR ............................................................................. 187 SIN .............. 186 SIGN............................................................................................................................................... 197 ASCII ....................... 184 RADIANS.................................................................................................................. 204 INET_ATON .................................................................................................. 183 PI ................ 211 LPAD .......................................... 201 CLIENT_ENCODING .......................... 188 TAN................................................................................................................ 216 QUOTE_LITERAL .......................................................................................................................................................................................... 218 RIGHT ......................................................................................................................SQL Reference Manual MOD..................................... 
205 INET_NTOA ......................................................................................................................................................................................................................................................................................................................... 191 COALESCE ............................................................................................................................................................................................................................................................................................................................................ 182 MODULARHASH ................................................................................................................................................. 185 RANDOMINT ............................................................... 207 LEAST.............................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................. 215 QUOTE_IDENT .......... 187 SQRT ................................................................................................................................................................................................. 195 String Functions ................................................................................................................ 219 RPAD ....................................................................................... 213 OCTET_LENGTH ..................... 
200 CHR..................................................................................................................................................................... 217 REPLACE.... 198 BITCOUNT ........................ 184 RANDOM.................................................................................................................................................................................. 192 NULLIF ........................................................ 206 INITCAP ....................................................................................................................................................................................................... 214 POSITION ......................................................................... 201 DECODE ......................... 209 LEFT ........................................................................................................................ 210 LENGTH ............................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................. 220 RTRIM .................................... 198 BITSTRING_TO_BINARY ............................................................................................................................................. 200 CHARACTER_LENGTH ............................. 
202 GREATEST ........................................................................................................................................................................................................................................................... 203 HEX_TO_BINARY ........................... 221 -xxvi- ...... 212 LTRIM.............. 199 BTRIM ............................................................................................................................................................................................................................. 214 OVERLAY ................................................................................................................................................................................................................ 197 BIT_LENGTH ................................................................................ 185 ROUND .................................................................................................................................................................................................................................................................................................................................... 189 NULL-handling Functions............ 213 MD5 ........................................................................................................................................................... 193 NVL2 ........................................... 188 TRUNC............. 217 REPEAT ................................................... 211 LOWER ............................................................................................................ 193 NVL.......................................................................................................................................... 184 POWER ..............................................................................................................

................................................................................................................................................................................................................................................................................................................................................................................................................................................................................. 236 USER .................................................... 230 V6_SUBNETN ............................................... 234 CURRENT_DATABASE ............................. 240 ANALYZE_CONSTRAINTS .................................................................................................................... 240 ALTER_LOCATION_USE ............................................................................................................................... 238 ADD_LOCATION ............................................................................................................................................................................................................................................................................. 227 V6_ATON ..................... 228 V6_NTOA .......Contents STRPOS .................................................... 270 DUMP_PROJECTION_PARTITION_KEYS .................................. 250 CLOSE_ALL_SESSIONS .......................................................................... 269 DUMP_LOCKTABLE ... 248 CANCEL_REFRESH ................................. 259 DISABLE_DUPLICATE_KEY_ERROR .............................................................................................................................. 239 ADVANCE_EPOCH................................................. 255 CREATE_DESIGN_CONTEXT ................................................................................................... 
224 TO_HEX................................................................................................................................................................................................................................................................................................................................................................................................................................................................. 222 SUBSTR ............. 261 DISPLAY_LICENSE ............................................................................................... 257 DEPLOY_DESIGN................................................................................................................................................................................................................. 235 SESSION_USER ........................................ 230 V6_TYPE ...................................................................................................................... 235 HAS_TABLE_PRIVILEGE ........................................................................... 249 CLEAR_DESIGN_TABLES ..................................................... 252 CONFIGURE_DEPLOYMENT .......................................................................................................................................................................... 237 VERSION ..................... 253 CREATE_DESIGN .................................................... 271 EXPORT_CATALOG........................................................... 256 CREATE_DESIGN_QUERIES_TABLE ................................................................................................................................................................................................................................................ 
271 -xxvii- ....................................................................................................................................................................... 234 CURRENT_USER . 263 DO_TM_TASK......................................... 263 DROP_LOCATION ........................................................................................... 249 CLEAR_QUERY_REPOSITORY ............................................................................................................................................................................ 270 DUMP_PARTITION_KEYS ................................................................................................................................................................................................................. 237 Vertica Functions ............................................................................................... 250 CLOSE_SESSION .............. 229 V6_SUBNETA ..................... 266 DUMP_CATALOG ........................................................................... 223 TO_BITSTRING................................................................................................................................................. 226 TRIM ........................ 270 DUMP_TABLE_PARTITION_KEYS................................................................................ 241 ANALYZE_STATISTICS......................... 225 TRANSLATE ........................................................................................................................................................................................................................................... 231 System Information Functions ........................................................................................................................................................... 247 CANCEL_DEPLOYMENT...................................................... 
238 ADD_DESIGN_TABLES ............................................. 226 UPPER........................ 257 CREATE_PROJECTION_DESIGN .............................................................................................................................................................. 234 CURRENT_SCHEMA ................................................................................................................................................. 265 DROP_PARTITION ...................................................................................................................................................................................................................................................................... 248 CLEAR_DESIGN_SEGMENTATION_TABLE.... 255 CREATE_DESIGN_CONFIGURATION ....................................................................................................................................................................................................................................................................................................................... 222 SUBSTRING .................

............................................... 283 MARK_DESIGN_KSAFE ............................................................................................................................................................................. 290 READ_DATA_STATISTICS..................................................................................................................................................................................................................................................................................................... 295 RUN_DEPLOYMENT ......................................... 285 MEASURE_LOCATION_PERFORMANCE .............................................................................................................................................................................................................................................................................................. 293 RESTORE_LOCATION ......... 276 GET_NUM_REJECTED_ROWS ........................................................................................................................................... 287 PARTITION_PROJECTION .................................. 276 GET_NUM_ACCEPTED_ROWS ......................................................... 296 SAVE_DESIGN_VERSION ................................................... 276 GET_PROJECTION_STATUS ................................ 309 TEMP_DESIGN_SCRIPT ........................................................................................................................................................................................................................................................... GET_TABLE_PROJECTIONS .............................. 301 SET_DESIGN_LOG_FILE ............................................................................................................................................................................................... 274 GET_AHM_TIME ............... 
297 SAVE_QUERY_REPOSITORY ........................................... 303 SET_DESIGN_QUERIES_TABLE ................. 272 EXPORT_DESIGN_TABLES........................................................................................ 302 SET_DESIGN_LOG_LEVEL .......... 277 GET_PROJECTIONS...................................................................................... 298 SET_AHM_TIME ..................................................................................................................... 291 REENABLE_DUPLICATE_KEY_ERROR .......................... 275 GET_DESIGN_SCRIPT ................................................................................................................................................................................................................... 289 PURGE ....................................................................................................................... 292 REMOVE_DEPLOYMENT_ENTRY ............................................................................................................................................. 306 SET_LOCATION_PERFORMANCE .............................................................................. 294 REVERT_DEPLOYMENT ............................................................................................................................................................................................ 279 LOAD_DATA_STATISTICS ........................................................ 306 SET_DESIGN_TABLE_ROWS ................................... 290 PURGE_PROJECTION ............................................................................. 292 REMOVE_DESIGN................................................ 303 SET_DESIGN_QUERY_CLUSTER_LEVEL ............................... 309 UPDATE_DESIGN......................................................................................................... 298 SET_AHM_EPOCH................... 
288 PARTITION_TABLE ...................................................................................................................................................... 282 LOAD_DESIGN_QUERIES ............................................................................................................................................................................................................................................................................................................................................................ 274 GET_CURRENT_EPOCH .................................................................................................................................................................... 297 SELECT CURRENT_SCHEMA ........... 273 GET_AHM_EPOCH ........ 290 PURGE_TABLE ................................ 307 START_REFRESH ....................... 305 SET_DESIGN_SEGMENTATION_TABLE . 293 RESET_DESIGN_QUERIES_TABLE ...... 300 SET_DESIGN_KSAFETY ..................................................................................................................................................................................................................... 282 MAKE_AHM_NOW. 292 REMOVE_DESIGN_CONTEXT ................................... 275 GET_LAST_GOOD_EPOCH ................... 304 SET_DESIGN_SEGMENTATION_COLUMN ................................................................................................................................................................................ 294 RETIRE_LOCATION ...................................................................... 277 IMPLEMENT_TEMP_DESIGN ........................................................................................................ 
279 INTERRUPT_STATEMENT ...............................................................................................................................................SQL Reference Manual EXPORT_DESIGN_CONFIGURATION ................................................ 308 SYNC_CURRENT_DESIGN ...................................... 302 SET_DESIGN_PARAMETER ............................................................................................. 286 MERGE_PARTITIONS .................................................................................................................................................................................. 310 -xxviii- ....................................................................................................................................................................................................................... 273 EXPORT_STATISTICS.................................................................

Contents

WAIT_DEPLOYMENT

SQL Statements
    ALTER PROJECTION
    ALTER SCHEMA
    ALTER TABLE
        table-constraint
    ALTER USER
    COMMIT
    COPY
    CREATE PROJECTION
        encoding-type
        hash-segmentation-clause
        range-segmentation-clause
    CREATE SCHEMA
    CREATE TABLE
        column-definition
        column-constraint
    CREATE TEMPORARY TABLE
    CREATE USER
    CREATE VIEW
    DELETE
    DROP PROJECTION
    DROP SCHEMA
    DROP TABLE
    DROP USER
    DROP VIEW
    EXPLAIN
    GRANT (Schema)
    GRANT (Table)
    GRANT (View)
    INSERT
    LCOPY
    RELEASE SAVEPOINT
    REVOKE (Schema)
    REVOKE (Table)
    REVOKE (View)
    ROLLBACK
    ROLLBACK TO SAVEPOINT
    SAVEPOINT
    SELECT
        FROM Clause
        WHERE Clause
        GROUP BY Clause
        HAVING Clause
        ORDER BY Clause
        LIMIT Clause
        OFFSET Clause
    SET
        DATESTYLE
        SEARCH_PATH
        SESSION CHARACTERISTICS
        TIME ZONE
    SHOW
        SHOW SEARCH_PATH
    TRUNCATE TABLE
    UNION
    UPDATE

SQL System Tables (Monitoring APIs)
    V_CATALOG Schema
        COLUMNS
        FOREIGN_KEYS
        GRANTS
        PRIMARY_KEYS
        PROJECTIONS
        TABLE_CONSTRAINTS
        TABLES
        TYPES
        USERS
        VIEWS
        VIEW_COLUMNS
        SYSTEM_TABLES
    V_MONITOR Schema
        ACTIVE_EVENTS
        COLUMN_STORAGE
        CURRENT_SESSION
        DISK_RESOURCE_REJECTIONS
        DISK_STORAGE
        EVENT_CONFIGURATIONS
        EXECUTION_ENGINE_PROFILES
        HOST_RESOURCES
        LOAD_STREAMS
        LOCAL_NODES
        LOCKS
        NODE_RESOURCES
        PARTITIONS
        PROJECTION_REFRESHES
        PROJECTION_STORAGE
        QUERY_METRICS
        QUERY_PROFILES
        RESOURCE_REJECTIONS
        RESOURCE_USAGE
        SESSION_PROFILES
        SESSIONS
        STORAGE_CONTAINERS
        SYSTEM
        TUPLE_MOVER_OPERATIONS
        WOS_CONTAINER_STORAGE
    Deprecated System Tables
        VT_ACTIVE_EVENTS
        VT_COLUMN_STORAGE
        VT_CURRENT_SESSION
        VT_DISK_RESOURCE_REJECTIONS
        VT_DISK_STORAGE
        VT_EE_PROFILING
        VT_GRANT
        VT_LOAD_STREAMS
        VT_LOCK
        VT_NODE_INFO
        VT_PARTITIONS
        VT_PROJECTION
        VT_PROJECTION_REFRESH
        VT_PROJECTION_STORAGE
        VT_QUERY_METRICS
        VT_QUERY_PROFILING
        VT_RESOURCE_REJECTIONS
        VT_RESOURCE_USAGE
        VT_SCHEMA
        VT_SESSION
        VT_SESSION_PROFILING
        VT_SYSTEM
        VT_TABLE
        VT_TABLE_STORAGE
        VT_TUPLE_MOVER
        VT_VIEW
        VT_WOS_STORAGE

Index


Technical Support

To submit problem reports, questions, comments, and suggestions, use the Technical Support page on the Vertica Systems, Inc. Web site:

1 Go to http://www.vertica.com/support (http://www.vertica.com/support).
2 Click My Support.

Note: You must be a registered user in order to access the support page.

You can also email verticahelp@vertica.com. Before reporting a problem, run the Diagnostics Utility described in the Troubleshooting Guide and attach the resulting .zip file.


About the Documentation

This section describes how to access and print Vertica documentation. It also includes suggested reading paths (page 36).

Where to Find the Vertica Documentation

You can read or download the Vertica documentation for the current release of Vertica® Analytic Database from the Vertica Systems, Inc. Web site's Product Documentation Page http://www.vertica.com/v-zone/product_documentation. You must be a registered user to access this page, and you need a V-Zone login to access the documentation.

Note: The documentation on the Vertica Systems, Inc. Web site is updated each time a new release is issued. If you are using an older version, refer to the documentation on your database server or client systems.

The documentation is available as an rpm (which you can install on the database server system in the directory /opt/vertica/doc), or as a .tar.gz or .zip file. See Installing Vertica Documentation.

Reading the Online Documentation

Reading the HTML Documentation Files

The Vertica documentation files are provided in HTML browser format for platform independence. The HTML files require only a browser that displays frames properly with JavaScript enabled; they do not require a Web (HTTP) server. The Vertica documentation has been tested on the following browsers:
• Internet Explorer 7
• FireFox

Please report any script, image rendering, or text formatting problems to Technical Support (on page 33).

Note: The Vertica documentation contains links to Web sites of other companies or organizations that Vertica does not own or control. If you find broken links, please let us know.

Printing the HTML Documentation Files

Mozilla Firefox
1 Right-click the frame containing the information you want to print.
2 From the menu bar, select File > Print.
3 In the Print Frames window, make sure The Selected Frame is checked, and click OK.

Note: In later versions of Firefox, you can right-click inside the source frame and select This Frame > Print Frame > OK from the submenu.

Internet Explorer 7
1 Drag select the information you want to print.
2 From the menu bar, select File > Print.
3 Under the General tab, click Selection from the Page Range window, and click Print.

Printing Full Books

Vertica also publishes books as Adobe Acrobat™ PDF. The books are designed to be printed on standard 8½ x 11 paper using full duplex (two-sided printing).

Note: Vertica manuals are topic driven and not meant to be read in a linear fashion. Because each topic starts a new page, some of the pages are very short, and there are blank pages between each topic. Therefore, the PDFs do not resemble the format of typical books.

You can download the latest version of the free Acrobat Reader from the Adobe Web site (http://www.adobe.com/products/acrobat/readstep2.html). Open and print the PDF documents using the Adobe Reader. The following list provides links to the PDFs:
• Release Notes
• Concepts Guide
• Installation and Configuration Guide
• Getting Started Guide
• Administrator's Guide
• Programmer's Guide
• SQL Reference Manual
• Troubleshooting Guide

Suggested Reading Paths

This section provides a suggested reading path for various users. Vertica recommends that you read the manuals listed under All Users first.

All Users
• Release Notes — Release-specific information, including new features and behavior changes to the product and documentation
• Concepts Guide — Basic concepts critical to understanding Vertica
• Getting Started Guide — Step-by-step guide to getting Vertica up and running

System Administrators
• Installation and Configuration Guide — Platform configuration and software installation

• Release Notes — Release-specific information, including new features and behavior changes to the product and documentation
• Troubleshooting Guide — General troubleshooting information

Database Administrators
• Installation and Configuration Guide — Platform configuration and software installation
• Administrator's Guide — Database configuration, loading, transactions, and maintenance
• Troubleshooting Guide — General troubleshooting information

Application Developers
• Programmer's Guide — Connecting to a database, queries, and so on
• SQL Reference Manual — SQL and Vertica-specific language information
• Troubleshooting Guide — General troubleshooting information


Where to Find Additional Information

Visit the Vertica Systems, Inc. Web site (http://www.vertica.com) to keep up to date with:
• Downloads
• Frequently Asked Questions (FAQs)
• Discussion forums
• News, tips, and techniques

Typographical Conventions

The following are the typographical and syntax conventions used in the Vertica documentation.

Typographical Convention — Description

Bold — Indicates areas of emphasis.
Code — SQL and program code displays in a monospaced (fixed-width) font.
Database objects — Names of database objects, such as tables, are shown in san-serif type.
Emphasis — Indicates emphasis and the titles of other documents or system files, for example, vertica.log.
monospace — Indicates literal interactive or programmatic input/output.
monospace italics — Indicates user-supplied information in interactive or programmatic input/output.
UPPERCASE — Indicates the name of a SQL command or keyword. SQL keywords are case insensitive; SELECT is the same as Select, which is the same as select.
User input — Text entered by the user is shown in bold san serif type.
Press — Indicates that the reader perform some action on the keyboard, for example, "Press Enter." Press indicates the Return/Enter key and is implicit on all user input that includes text.
Right-angle bracket > — Indicates a flow of events, usually from a drop-down menu, such as a special menu command.
Click — Indicates that the reader should click options, such as menu command buttons, radio buttons, and mouse selections, for example, "Click OK to proceed."

Syntax Convention — Description

Alternatives { } — When precisely one of the options must be chosen, the alternatives are enclosed in curly braces, for example: QUOTES { ON | OFF } indicates that exactly one of ON or OFF must be provided. You do not type the braces.
Backslash \ — Continuation character used to indicate text that is too long to fit on a single line.
Brackets [ ] — Indicate optional items, for example, CREATE TABLE [schema_name.]table_name. The brackets indicate that the schema_name is optional. Do not type the square brackets.
Ellipses ... — Indicate a repetition of the previous parameter. For example, option[,...] means that you can enter multiple, comma-separated options.
Indentation — Is an attempt to maximize readability. SQL is a free-form language.
Placeholders — Items that must be replaced with appropriate identifiers or expressions are shown in italics.
Vertical bar | — When none or only one of a list of items must be chosen, the items are separated by vertical bars (also called a pipe) with the list enclosed in square brackets, for example: [ ASC | DESC ] indicates that you can choose one of ASC, DESC, or neither. You do not type the square brackets.
Vertical ellipses — Indicate an optional sequence of similar items or that part of the text has been omitted for readability.

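As an illustration of the bracket and alternative notation described above, the following statements are sketched with hypothetical table and column names; both forms of each pair are valid because the bracketed items are optional.

```sql
-- [schema_name.] is optional: both statements create a table.
CREATE TABLE store.sales_fact (sale_id INT);  -- schema name supplied
CREATE TABLE sales_fact (sale_id INT);        -- schema name omitted

-- [ ASC | DESC ] means you can choose ASC, DESC, or neither:
SELECT sale_id FROM sales_fact ORDER BY sale_id;       -- neither chosen
SELECT sale_id FROM sales_fact ORDER BY sale_id DESC;  -- DESC chosen
```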

Preface

This document provides a reference description of the Vertica SQL database language.

Audience

This document is intended for anyone who uses Vertica. It assumes that you are familiar with the basic concepts and terminology of the SQL language and relational database management systems.


SQL Overview

SQL (Structured Query Language) is a widely-used, industry standard data definition and data manipulation language for relational databases.

Vertica Support for ANSI SQL Standards

Vertica SQL supports a subset of ANSI SQL 99. See BNF Grammar for SQL-99 (http://savage.net.au/SQL/sql-99.bnf.html). Over time, Vertica SQL expands and eventually converges with ANSI SQL 99.

Support for Historical Queries

Unlike most databases, the DELETE (page 358) command in Vertica does not delete data; it simply marks records as deleted. The UPDATE (page 408) command performs an INSERT and a DELETE. This behavior is necessary for historical queries. You can control how much historical data is stored on disk, as described in the Administrator's Guide.

Non-standard Syntax and Semantics

In case of non-standard SQL syntax or semantics, Vertica SQL follows Oracle whenever possible. For Oracle SQL documentation, see the Oracle Database 10g Documentation Library (http://www.oracle.com/pls/db102/homepage). Note that you need a web account to access the library.

Vertica Major Extensions to SQL

Vertica provides several extensions to SQL that allow you to use the unique aspects of its column store architecture:
• AT EPOCH LATEST SELECT... runs a SQL query in snapshot isolation mode, in which Vertica does not hold locks or block other processes, such as data loads.
• AT TIME 'timestamp' SELECT... runs historical queries against a snapshot of the database at a specific date and time.
• COPY is used for bulk loading data. It reads data from a text file and inserts tuples into the WOS (Write Optimized Store) or directly into the ROS (Read Optimized Store).
• CREATE/DROP/ALTER PROJECTION manipulate projections. CREATE PROJECTION commands are generated for you by the Database Designer.
• CONSTRAINT ... CORRELATION (column) REFERENCES (column) captures Functional Dependencies, which the Database Designer can use to produce more efficient projections.
• SELECT Vertica <Function> executes special Vertica functions (page 238).
• SET DATESTYLE chooses the format in which date and time values display.
• SET SEARCH_PATH specifies the order in which Vertica searches through multiple schemas when a SQL statement contains an unqualified table name.
• SET TIME ZONE specifies the TIME ZONE run-time parameter for the current session.
• SHOW displays run-time parameters for the current session.

Note: In Vertica, use a semicolon to end a statement or combine multiple statements.
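For example, the historical-query and snapshot-isolation extensions described above might be used as follows. The table name sales is hypothetical; note the semicolon terminating each statement.

```sql
-- Query a snapshot of the latest committed data without holding locks:
AT EPOCH LATEST SELECT COUNT(*) FROM sales;

-- Run a historical query against the database as of a specific date and time:
AT TIME '2009-09-01 09:00:00' SELECT COUNT(*) FROM sales;
```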

Joins

Vertica supports typical data warehousing query joins, such as joins on columns with primary key-foreign key relationships, inner joins, and some outer joins with restrictions based on the physical schema. For details, see Joins in the Programmer's Guide.

Transactions

Session-scoped isolation levels determine transaction characteristics for transactions within a specific user session. Specifically, they determine what data a transaction can access when other transactions are running concurrently. You set them through the SET SESSION CHARACTERISTICS (page 398) command.

Although the Vertica query parser understands all four standard SQL isolation levels (READ UNCOMMITTED, READ COMMITTED, REPEATABLE READ, and SERIALIZABLE) for a user session, internally Vertica uses only READ COMMITTED and SERIALIZABLE. Vertica automatically translates READ UNCOMMITTED to READ COMMITTED and REPEATABLE READ to SERIALIZABLE. Therefore, the isolation level Vertica uses could be more strict than the isolation level you choose.

By default, Vertica uses the SERIALIZABLE isolation level. However, you can change the isolation level for the database or individual transactions. See Changing Transaction Isolation Levels.

The following table highlights the behaviors of READ COMMITTED and SERIALIZABLE isolation. For specific information, see SERIALIZABLE Isolation and READ COMMITTED Isolation.

Isolation Level   Epoch Used                                          Dirty Read     Non Repeatable Read   Phantom Read
READ COMMITTED    Last epoch for reads and current epoch for writes   Not Possible   Possible              Possible
SERIALIZABLE      Current epoch for reads and writes                  Not Possible   Not Possible          Not Possible

Implementation Details

Vertica supports conventional SQL transactions with standard ACID properties:
• Vertica supports ANSI SQL 92 style-implicit transactions. You do not need to execute a BEGIN or START TRANSACTION command.
• Vertica does not use a redo/undo log or two-phase commit.
• The COPY (page 323) command automatically commits itself and any current transaction (except when loading temp tables). Vertica recommends that you COMMIT (page 322) or ROLLBACK (page 379) the current transaction before using COPY.
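As a sketch of how a session might request a different isolation level, the statements below assume the ANSI form of the SET SESSION CHARACTERISTICS command; consult the statement's reference page for the exact syntax.

```sql
-- Request READ COMMITTED for subsequent transactions in this session:
SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- REPEATABLE READ is accepted by the parser but is silently
-- translated to SERIALIZABLE, as described above:
SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL REPEATABLE READ;
```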

Automatic Rollback

A rollback reverts data in a database to an earlier state by discarding any changes to the database state that have been performed by a transaction's statements. A rollback can be done automatically in response to an error or through an explicit rollback transaction. In addition, when a rollback occurs, it releases any locks that the transaction might have held.

Vertica supports transaction-level and statement-level rollbacks. A transaction-level rollback discards all modifications made by a transaction. A statement-level rollback undoes just the effects made by a particular statement. Most errors caused by a statement result in a statement-level rollback to undo the effects of the erroneous statement; Vertica uses ERROR messages to indicate this type of error. DDL errors, systemic failures, dead locks, and resource constraints result in transaction-level rollback; Vertica uses ROLLBACK messages to indicate this type of error.

To implement automatic, statement-level rollbacks in response to errors, Vertica automatically inserts an implicit savepoint after each successful statement, one at a time. If the statement is successful, the marker automatically rolls forward. This marker allows the next statement, and only the next statement, to be rolled back if it results in an error. Implicit savepoints are available to Vertica only and cannot be referenced directly.

To explicitly roll back an entire transaction, use the ROLLBACK (page 379) statement. To explicitly roll back individual statements, use explicit savepoints.

Savepoints

Vertica supports using savepoints. A savepoint is a special mark inside a transaction that allows all commands that are executed after it was established to be rolled back. This restores the transaction to the state it was in at the point in which the savepoint was established.

Savepoints are useful when creating nested transactions. For example, a savepoint could be created at the beginning of a subroutine. That way, the result of the subroutine could be rolled back if necessary.

Use the SAVEPOINT (page 380) statement to establish a savepoint, the RELEASE SAVEPOINT (page 374) statement to destroy it, or the ROLLBACK TO SAVEPOINT (page 379) statement to roll back all operations that occur within the transaction after the savepoint was established.
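A minimal sketch of the savepoint statements described above follows; the table name and values are hypothetical.

```sql
INSERT INTO accounts VALUES (1, 100);
SAVEPOINT before_bonus;               -- establish a savepoint
INSERT INTO accounts VALUES (2, 50);
ROLLBACK TO SAVEPOINT before_bonus;   -- undo everything after the savepoint
RELEASE SAVEPOINT before_bonus;       -- destroy the savepoint marker
COMMIT;                               -- only the first INSERT is committed
```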


System Limits

This section describes system limits on the size and number of objects in a Vertica database. In most cases, computer memory and disk drive are the limiting factors.

Item — Limit

Database size — Approximates the number of files times the file size on a platform, depending on the maximum disk configuration
Table size — 2^64 rows per node, or 2^63 bytes per column, whichever is smaller
Row size — 8MB. The row size is approximately the sum of its maximum column sizes; for example, a varchar(80) has a maximum size of 80 bytes.
Key size — 1600 x 4000
Number of tables/projections per database — Limited by physical RAM, as the catalog must fit in memory
Number of concurrent connections per node — Limited by physical RAM of a single node (or threads per process), typically 1024
Number of concurrent connections per cluster — Limited by physical RAM (or threads per process), typically 1024
Number of columns per table — 1600
Number of rows per load — 2^64
Number of partitions — 256. Note: The maximum number of partitions varies with the number of columns in the table, as well as system RAM. Vertica recommends a maximum of 20 partitions; ideally, create no more than 12.
Length for a fixed-length column — 65000 bytes
Length for a variable-length column — 65000 bytes
Length of basic names — 128 bytes. Basic names include table names, column names, and so on.
Depth of nesting subqueries — Unlimited in FROM or WHERE or HAVING clause
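As an illustration of the subquery-nesting limit noted above (unlimited in FROM, WHERE, or HAVING clauses), the following query nests subqueries in both the FROM and WHERE clauses; the table and column names are hypothetical.

```sql
-- Subqueries may nest arbitrarily deep in the FROM and WHERE clauses:
SELECT *
FROM (SELECT * FROM (SELECT sale_id FROM sales) AS inner_q) AS outer_q
WHERE sale_id IN (SELECT sale_id FROM returns WHERE sale_id > 0);
```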


Some SQL keywords are also reserved words that cannot be used in an identifier unless enclosed in double quote (") characters. Although SQL is not case-sensitive with respect to keywords. they are generally shown in uppercase letters throughout this documentation for readability purposes.SQL Language Elements This chapter presents detailed descriptions of the language elements and conventions of Vertica SQL. Keywords ABORT AFTER ANALYSE AS AUTHORIZATIO N BACKWARD BINARY BOTH CACHE CATALOGPATH CHECK COALESCE COMMITTED CONVERT CREATEUSER CURRENT_TIM E CYCLE DATA DEC DATABASE DECIMAL DATAPATH DECLARE DAY DEFAULT DEALLOCATE DEFAULTS BEFORE BIT BY CALLED CHAIN CHECKPOINT COLLATE COMMONDEL TA_COMP COPY CROSS CURRENT_TI MESTAMP CASCADE CHAR CLASS COLUMN CONSTRAINT CORRELATION CSV CURRENT_USER CASE CHARACTER CLOSE COMMENT CONSTRAINTS CREATE CURRENT_DATA BASE CURRENT_SCHE MA CAST CHARACTERISTIC S CLUSTER COMMIT CONVERSION CREATEDB CURRENT_DATE CURSOR BEGIN BLOCK_DICT BETWEEN BLOCKDICT_CO MP BIGINT BOOLEAN ABSOLUTE AGGREGATE ANALYZE ASC ACCESS ALL AND ASSERTION ACTION ALSO ANY ASSIGNMENT ADD ALTER ARRAY AT -51- . Keywords and Reserved Words Keywords are words that have a specific meaning in the SQL language.

SQL Reference Manual DEFERRABLE DELIMITERS DETERMINES DOMAIN EACH EPOCH EXCLUDING EXTERNAL FALSE FOR FROM GLOBAL HANDLER ILIKE INCLUDING INNER INSTEAD INTO JOIN KEY LANCOMPILER LEADING LIMIT LOCALTIMESTA MP MATCH MOBUF MULTIALGORIT HM_COMP NAMES NEXT NODES NOTNULL DEFERRED DELTARANGE _COMP DIRECT DOUBLE ELSE ERROR EXCLUSIVE EXTRACT FETCH FORCE FULL GRANT HAVING IMMEDIATE INCREMENT INOUT INT INVOKER DEFINER DELTARANGE_C OMP_SP DISTINCT DROP ENCODING ESCAPE EXECUTE DELETE DELTAVAL DISTVALINDEX DELIMITER DESC DO ENCRYPTED EXCEPT EXISTS END EXCEPTIONS EXPLAIN FILLER FOREIGN FUNCTION GROUP HOLD IMMUTABLE INDEX INPUT INTEGER IS FIRST FORWARD FLOAT FREEZE HOUR IMPLICIT INHERITS INSENSITIVE INTERSECT ISNULL IN INITIALLY INSERT INTERVAL ISOLATION LANGUAGE LEFT LISTEN LOCATION MAXVALUE MODE MULTIALGORI THM_COMP_ SP NATIONAL NO NONE NOWAIT LARGE LESS LOAD LOCK MERGEOUT MONTH LAST LEVEL LOCAL LATEST LIKE LOCALTIME MINUTE MOVE MINVALUE MOVEOUT NATURAL NOCREATEDB NOT NULL NCHAR NOCREATEUSER NOTHING NULLIF NEW NODE NOTIFY NUMERIC -52- .

OBJECT, OF, OFF, OFFSET, OIDS, OLD, ON, ONLY, OPERATOR, OPTION, OR, ORDER, OUT, OUTER, OVERLAPS, OVERLAY, OWNER, PARTIAL, PASSWORD, PINNED, PLACING, POSITION, PRECISION, PREPARE, PRESERVE, PRIMARY, PRIOR, PRIVILEGES, PROCEDURAL, PROCEDURE, PROJECTION, QUOTE, READ, REAL, RECHECK, RECORD, RECOVER, REFERENCES, REFRESH, REINDEX, REJECTED, RELATIVE, RELEASE, RENAME, REPEATABLE, REPLACE, RESET, RESTART, RESTRICT, RETURNS, REVOKE, RIGHT, RLE, ROLLBACK, ROW, ROWS, RULE, SAVEPOINT, SCHEMA, SCROLL, SECOND, SECURITY, SEGMENTED, SELECT, SEQUENCE, SERIALIZABLE, SESSION, SESSION_USER, SET, SETOF, SHARE, SHOW, SIMILAR, SIMPLE, SMALLINT, SOME, SPLIT, STABLE, START, STATEMENT, STATISTICS, STDIN, STDOUT, STORAGE, STRICT, SUBSTRING, SYSDATE, SYSID, TABLE, TABLESPACE, TEMP, TEMPLATE, TEMPORARY, TERMINATOR, THAN, THEN, TIME, TIMESTAMP, TIMESTAMPTZ, TIMETZ, TO, TOAST, TRAILING, TRANSACTION, TREAT, TRIGGER, TRIM, TRUE, TRUNCATE, TRUSTED, TYPE, UNCOMMITTED, UNENCRYPTED, UNION, UNIQUE, UNKNOWN, UNLISTEN, UNSEGMENTED, UNTIL, UPDATE, USAGE, USER, USING, VACUUM, VALID, VALIDATOR, VALINDEX, VALUES, VARCHAR, VARYING, VERBOSE, VIEW, VOLATILE, WHEN, WHERE, WITH, WITHOUT, WORK

WRITE, YEAR, ZONE

Reserved Words

ALL, ANALYSE, ANALYZE, AND, ANY, ARRAY, AS, ASC, BOTH, CASE, CAST, CHECK, COLLATE, COLUMN, CONSTRAINT, CREATE, CURRENT_DATABASE, CURRENT_DATE, CURRENT_TIME, CURRENT_TIMESTAMP, CURRENT_USER, DEFAULT, DEFERRABLE, DESC, DISTINCT, DO, ELSE, END, EXCEPT, FALSE, FILLER, FOR, FOREIGN, FROM, GRANT, GROUP, HAVING, IN, INITIALLY, INTERSECT, INTO, LEADING, LIMIT, LOCALTIME, LOCALTIMESTAMP, NEW, NODE, NODES, NOT, NULL, OFF, OFFSET, OLD, ON, ONLY, OR, ORDER, PLACING, PRIMARY, REFERENCES

SCHEMA, SELECT, SESSION_USER, SOME, TABLE, THEN, TO, TRAILING, TRUE, UNION, UNIQUE, USER, USING, WHEN, WHERE

Identifiers

Identifiers (names) of objects such as schema, table, projection, and column names can be up to 128 bytes in length.

Unquoted Identifiers

Unquoted SQL identifiers must begin with one of the following:
• An alphabetic character (A-Z or a-z, including letters with diacritical marks and non-Latin letters)
• Underscore (_)

Subsequent characters in an identifier can be:
• Alphabetic
• Numeric (0-9)
• Dollar sign ($). Dollar sign is not allowed in identifiers according to the SQL standard and could cause application portability problems.

Quoted Identifiers

Identifiers enclosed in double quote (") characters can contain any character. You can use names that would otherwise be invalid, such as names that include only numeric characters ("123") or contain space characters, punctuation marks, keywords, and so on. If you want to include a double quote, you need a pair of them, for example '""'.

Note: Identifiers are not case-sensitive. Thus, identifiers "ABC", "ABc", and "aBc" are synonymous, as are ABC, ABc, and aBc.

Constants

Constants are numbers or strings.
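A short sketch of the identifier rules above (the table and column names here are hypothetical, chosen only for illustration):

```sql
-- Unquoted identifiers are not case-sensitive; both statements refer to the same table.
CREATE TABLE sales_2009 (region VARCHAR(20));
SELECT region FROM SALES_2009;

-- Quoted identifiers can use otherwise-invalid names, such as all-digit names
-- or names containing spaces; an embedded double quote is written as a pair.
CREATE TABLE "123" ("region code" VARCHAR(20));
```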

Numeric Constants

Syntax
digits
digits.[digits][e[+-]digits]
[digits].digits[e[+-]digits]
digitse[+-]digits
NaN

Parameters
digits represents one or more numeric characters (0 through 9).

Notes
• At least one digit must be before or after the decimal point, if one is present.
• At least one digit must follow the exponent marker (e), if one is used.
• There cannot be any spaces or other characters embedded in the constant.
• Leading plus or minus signs are not actually considered part of the constant; they are unary operators applied to the constant.
• A numeric constant that contains neither a decimal point nor an exponent is initially presumed to be type INTEGER if its value fits; otherwise it is presumed to be DOUBLE PRECISION.
• In most cases a numeric constant is automatically coerced to the most appropriate type depending on context. When necessary, you can force a numeric value to be interpreted as a specific data type by casting it as described in Data Type Coercion Operators (CAST) (page 66).
• Vertica follows the IEEE specification for floating point, including NaN (not a number). A NaN is not greater than and at the same time not less than anything, even itself. In other words, comparisons always return false whenever a NaN is involved. See Numeric Expressions (page 78) for examples.

Examples
42
3.5
4.
.001
5e2
1.925e-3
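The typing rules above can be sketched with a few statements (output formatting varies by client, so results are omitted):

```sql
SELECT 42;     -- no decimal point or exponent: presumed INTEGER if the value fits
SELECT 4.;     -- decimal point present: presumed DOUBLE PRECISION
SELECT 5e2;    -- exponent form: DOUBLE PRECISION, value five hundred
SELECT CAST(1.925e-3 AS NUMERIC(10,6));  -- force a specific type when needed
```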

String Constants (Dollar-Quoted)

The standard syntax for specifying string constants can be difficult to understand when the desired string contains many single quotes or backslashes. To allow more readable queries in such situations, Vertica SQL provides "dollar quoting." Dollar quoting is not part of the SQL standard, but it is often a more convenient way to write complicated string literals than the standard-compliant single quote syntax. It is particularly useful when representing string constants inside other constants.

Syntax
$$characters$$

Parameters
characters is an arbitrary sequence of UTF-8 characters bounded by paired dollar signs ($$).

Notes
• A dollar-quoted string that follows a keyword or identifier must be separated from it by whitespace; otherwise the dollar quoting delimiter would be taken as part of the preceding identifier.
• Dollar-quoted string content is treated as a literal. Single quote, backslash, and dollar sign characters have no special meaning within a dollar-quoted string.
• The string functions do not handle multibyte UTF-8 sequences correctly. They treat each byte as a character.

Examples
SELECT $$Fred's\n car$$;
   ?column?
--------------
 Fred's\n car
(1 row)

String Constants (Standard)

Syntax
'characters'

Parameters
characters is an arbitrary sequence of UTF-8 characters bounded by single quotes (').

Using Single Quotes in a String
The SQL standard way of writing a single-quote character within a string constant is to write two adjacent single quotes, for example:
'Chester''s gorilla'
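The two quoting styles above express the same literal; a minimal side-by-side comparison:

```sql
SELECT 'Chester''s gorilla';   -- standard syntax: the embedded quote is doubled
SELECT $$Chester's gorilla$$;  -- dollar quoting: no doubling required
```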

Vertica SQL also allows single quotes to be escaped with a backslash (\), for example:
'Chester\'s gorilla'

C-style Backslash Escapes
Vertica SQL also supports the following C-style backslash escapes:
• \\ is a backslash
• \b is a backspace
• \f is a form feed
• \n is a newline
• \r is a carriage return
• \t is a tab
• \xxx, where xxx is an octal number representing a byte with the corresponding code. (It is your responsibility that the byte sequences you create are valid characters in the server character set encoding.)

Any other character following a backslash is taken literally.

Notes
• Vertica supports the UTF-8 character set. The string functions do not handle multibyte UTF-8 sequences correctly. They treat each byte as a character.
• The character with the code zero cannot be in a string constant.

Examples
SELECT 'This is a string';
     ?column?
------------------
 This is a string
(1 row)

Date/Time Constants

Date or time literal input must be enclosed in single quotes. Input is accepted in almost any reasonable format, including ISO 8601, SQL-compatible, traditional POSTGRES, and others. Vertica is more flexible in handling date/time input than the SQL standard requires. The exact parsing rules of date/time input and for the recognized text fields, including months, days of the week, and time zones, are described in Date/Time Expressions (page 76).

Time Zone Values

Vertica attempts to be compatible with the SQL standard definitions for time zones. However, the SQL standard has an odd mix of date and time types and capabilities. Obvious problems are:
• Although the DATE (page 96) type does not have an associated time zone, the TIME (page 97) type can. Time zones in the real world have little meaning unless associated with a date as well as a time, since the offset can vary through the year with daylight-saving time boundaries.

• The default time zone is specified as a constant numeric offset from UTC. It is therefore not possible to adapt to daylight-saving time when doing date/time arithmetic across DST boundaries.

To address these difficulties, Vertica recommends using date/time types that contain both date and time when using time zones. We recommend not using the type TIME WITH TIME ZONE, even though Vertica supports it for legacy applications and for compliance with the SQL standard. Vertica assumes your local time zone for any data type containing only date or time.

Time zones and time-zone conventions are influenced by political decisions, not just earth geometry. Time zones around the world became somewhat standardized during the 1900's, but continue to be prone to arbitrary changes, particularly with respect to daylight-savings rules. Vertica currently supports daylight-savings rules over the time period 1902 through 2038 (corresponding to the full range of conventional UNIX system time). Times outside that range are taken to be in "standard time" for the selected time zone, no matter what part of the year they fall in.

Example   Description
PST       Pacific Standard Time
-8:00     ISO-8601 offset for PST
-800      ISO-8601 offset for PST
-8        ISO-8601 offset for PST
zulu      Military abbreviation for UTC
z         Short form of zulu

Day of the Week Names

The following tokens are recognized as names of days of the week:

Day         Abbreviations
SUNDAY      SUN
MONDAY      MON
TUESDAY     TUE, TUES
WEDNESDAY   WED, WEDS

THURSDAY    THU, THUR, THURS
FRIDAY      FRI
SATURDAY    SAT

Month Names

The following tokens are recognized as names of months:

Month       Abbreviations
JANUARY     JAN
FEBRUARY    FEB
MARCH       MAR
APRIL       APR
MAY         MAY
JUNE        JUN
JULY        JUL
AUGUST      AUG
SEPTEMBER   SEP, SEPT
OCTOBER     OCT
NOVEMBER    NOV
DECEMBER    DEC

Interval Values

An interval value represents the duration between two points in time.

Syntax
[ @ ] quantity unit [ quantity unit... ] [ AGO ]

Parameters
@         (at sign) is optional and ignored
quantity  Is an integer numeric constant (page 56)
unit      Is one of the following units, or abbreviations or plurals of the following units:
          MILLENNIUM, CENTURY, DECADE, YEAR, MONTH, WEEK, DAY, HOUR, MINUTE, SECOND
AGO       [Optional] specifies a negative interval value (an interval going back in time). 'AGO' is a synonym for '-'.

Notes
• The amounts of different units are implicitly added up with appropriate sign accounting. In Vertica, the interval fields are additive and accept large floating point numbers.
• Quantities of days, hours, minutes, and seconds can be specified without explicit unit markings. For example, '1 12:59:10' is read the same as '1 day 12 hours 59 min 10 sec'.
• The boundaries of an interval constant are '9223372036854775807 usec' to '9223372036854775807 usec ago', that is, 296533 years 3 mons 21 days 04:00:54.775807 to -296533 years -3 mons -21 days -04:00:54.775807. The range of an interval constant is +/- 2^63 - 1 (plus or minus two to the sixty-third minus one) microseconds.

Examples
SELECT INTERVAL '1 12:59:10';
    interval
----------------
 1 day 12:59:10
(1 row)

SELECT INTERVAL '9223372036854775807 usec';
                   interval
----------------------------------------------
 296533 years 3 mons 21 days 04:00:54.775807
(1 row)

SELECT INTERVAL '-9223372036854775807 usec';
                    interval
-------------------------------------------------
 -296533 years -3 mons -21 days -04:00:54.775807
(1 row)

SELECT INTERVAL '-1 day 48.5 hours';
    interval
----------------
 1 day 00:30:00
(1 row)

SELECT TIMESTAMP 'Mar 1, 07' - TIMESTAMP 'Feb 1, 07';
 ?column?
----------
 1 mon
(1 row)

SELECT TIMESTAMP 'Apr 1, 07' - TIMESTAMP 'Mar 1, 07';
 ?column?
----------
 1 mon
(1 row)

SELECT TIMESTAMP 'Feb 1, 07' + INTERVAL '30 days';
      ?column?
---------------------
 2007-03-01 00:00:00
(1 row)

SELECT TIMESTAMP WITHOUT TIME ZONE '1999-10-01' + INTERVAL '1 month - 1 second' AS "Oct 31";
       Oct 31
---------------------
 1999-10-30 23:59:59
(1 row)

INSERT INTO timestamp_arth VALUES (1, '-infinity', 'epoch', 'allballs', '18days 09 hours 09 mins 999999999999 secs', ...);
SELECT * FROM timestamp_arth;
 t1 |    t2     |     t3     |    t4    |                 t5                 |           t6           |    t7
----+-----------+------------+----------+------------------------------------+------------------------+----------
  1 | -infinity | 1970-01-01 | 00:00:00 | 32150 years 3 mons 2 days 10:55:39 | 1999-01-27 09:09:09-05 | 09:09:09
(1 row)

Operators

Operators are logical, mathematical, and equality symbols used in SQL to evaluate, compare, or calculate values.

Binary Operators

Each of the functions in the following table works with binary and varbinary data types. Since binary can implicitly be converted to varbinary, these functions work for binary types as well. Keep in mind that when the binary value 'ab'::binary(3) is translated to varbinary, the result is equivalent to 'ab\\000'::varbinary(3).

Operator  Function    Description
=         binary_eq   Equal to
<>        binary_ne   Not equal to
<         binary_lt   Less than

<=        binary_le   Less than or equal to
>         binary_gt   Greater than
>=        binary_ge   Greater than or equal to
&         binary_and  And
~         binary_not  Not
|         binary_or   Or
#         binary_xor  Either or (exclusive or)
||        binary_cat  Concatenate

Notes
• If the arguments vary in length, these operators treat the values as though they are all equal in length by right-extending the smaller values with the zero byte to the full width of the column (except when using the binary_cat function). For example, given the values 'ff' and 'f', the value 'f' is treated as 'f0'.
• Operators are strict with respect to nulls. That is, the result is null if any argument is null. For example, null <> 'a'::binary returns null.

Examples
Note that the zero byte is not removed when values are concatenated. For example:
SELECT 'ab'::BINARY(3) || 'cd'::BINARY(2) AS cat1,
       'ab'::VARBINARY(3) || 'cd'::VARBINARY(2) AS cat2;
   cat1   | cat2
----------+------
 ab\000cd | abcd
(1 row)

The remaining examples illustrate the behavioral differences for binary operands.

The & operator:
SELECT TO_HEX(HEX_TO_BINARY('0x10') & HEX_TO_BINARY('0xF0'));
 to_hex
--------
 10
(1 row)

The | operator:
SELECT TO_HEX(HEX_TO_BINARY('0x10') | HEX_TO_BINARY('0xF0'));
 to_hex
--------
 f0
(1 row)

The # operator:
SELECT TO_HEX(HEX_TO_BINARY('0x10') # HEX_TO_BINARY('0xF0'));
 to_hex
--------
 e0
(1 row)

The ~ operator:
SELECT TO_HEX(~ HEX_TO_BINARY('0xF0'));
 to_hex
--------
 0f
(1 row)

Boolean Operators

Syntax
[ AND | OR | NOT ]

Parameters
SQL uses a three-valued Boolean logic where the null value represents "unknown":

 a     | b     | a AND b | a OR b
-------+-------+---------+--------
 TRUE  | TRUE  | TRUE    | TRUE
 TRUE  | FALSE | FALSE   | TRUE
 TRUE  | NULL  | NULL    | TRUE
 FALSE | FALSE | FALSE   | FALSE
 FALSE | NULL  | FALSE   | NULL
 NULL  | NULL  | NULL    | NULL

 a     | NOT a
-------+-------
 TRUE  | FALSE
 FALSE | TRUE
 NULL  | NULL

Notes
• The operators AND and OR are commutative; that is, you can switch the left and right operand without affecting the result. However, the order of evaluation of subexpressions is not defined. When it is essential to force evaluation order, use a CASE (page 73) construct.
• Do not confuse Boolean operators with the Boolean-predicate (page 80) or the Boolean (page 93) data type, which can have only two values: true and false.

Comparison Operators

Comparison operators are available for all data types where comparison makes sense. All comparison operators are binary operators that return values of True, False, or NULL.
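A hedged illustration of the three-valued logic above: NULL propagates through an operator unless the outcome is already decided by the other operand.

```sql
SELECT (TRUE OR NULL);    -- true: the result is decided regardless of the unknown
SELECT (FALSE AND NULL);  -- false: likewise
SELECT (TRUE AND NULL);   -- null: the result depends on the unknown operand
```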

Syntax and Parameters
<          less than
>          greater than
<=         less than or equal to
>=         greater than or equal to
= or <=>   equal
<> or !=   not equal

Notes
• The != operator is converted to <> in the parser stage. It is not possible to implement != and <> operators that do different things.
• The comparison operators return NULL (signifying "unknown") when either operand is null.
• The <=> operator performs an equality comparison like the = operator, but it returns true, instead of NULL, if both operands are NULL, and false, instead of NULL, if one operand is NULL.

Data Type Coercion Operators (CAST)

Data type coercion (casting) passes an expression value to an input conversion routine for a specified data type, resulting in a constant of the indicated type.

Syntax
CAST ( expression AS data-type )
expression::data-type
data-type 'string'

Parameters
expression  Is an expression of any type
data-type   Converts the value of expression to one of the following data types:
            BINARY (page 89)
            BOOLEAN (page 93)
            CHARACTER (page 94)
            DATE/TIME (page 96)
            NUMERIC (page 103)

Notes
• Vertica syntax requires the '::' operator to perform data type coercion (casting).

• Type coercion format of data-type 'string' can be used only to specify the data type of a quoted string constant.
• The explicit type cast can be omitted if there is no ambiguity as to the type the constant must be. For example, when a constant is assigned directly to a column, it is automatically coerced to the column's data type.
• If a binary value is cast (implicitly or explicitly) to a binary type with a smaller length, the value is silently truncated. For example:
  SELECT 'abcd'::BINARY(2);
   binary
  --------
   ab
  (1 row)
• On binary data that contains a value with fewer bytes than the target column, values are right-extended with the zero byte '\0' to the full width of the column. Trailing zeroes on variable length binary values are not right-extended:
  SELECT 'ab'::BINARY(4), 'ab'::VARBINARY(4);
     binary   | varbinary
  ------------+-----------
   ab\000\000 | ab
  (1 row)
• No casts other than BINARY to and from VARBINARY and resize operations are currently supported.

Examples
SELECT CAST((2 + 2) AS VARCHAR);
 varchar
---------
 4
(1 row)

SELECT (2 + 2)::VARCHAR;
 varchar
---------
 4
(1 row)

SELECT '2.2' + 2;
ERROR: invalid input syntax for integer: "2.2"

SELECT FLOAT '2.2' + 2;
 ?column?
----------
 4.2
(1 row)

Date/Time Operators

Syntax
[ + | - | * | / ]

Parameters
+  Addition
-  Subtraction
*  Multiplication
/  Division

Notes
• The operators described below that take TIME or TIMESTAMP inputs actually come in two variants: one that takes TIME WITH TIME ZONE or TIMESTAMP WITH TIME ZONE, and one that takes TIME WITHOUT TIME ZONE or TIMESTAMP WITHOUT TIME ZONE. For brevity, these variants are not shown separately.
• The + and * operators come in commutative pairs (for example, both DATE + INTEGER and INTEGER + DATE); only one of each such pair is shown.

Example                                                      Result Type  Result
DATE '2001-09-28' + INTEGER '7'                              DATE         '2001-10-05'
DATE '2001-09-28' + INTERVAL '1 HOUR'                        TIMESTAMP    '2001-09-28 01:00:00'
DATE '2001-09-28' + TIME '03:00'                             TIMESTAMP    '2001-09-28 03:00:00'
INTERVAL '1 DAY' + INTERVAL '1 HOUR'                         INTERVAL     '1 DAY 01:00:00'
TIMESTAMP '2001-09-28 01:00' + INTERVAL '23 HOURS'           TIMESTAMP    '2001-09-29 00:00:00'
TIME '01:00' + INTERVAL '3 HOURS'                            TIME         '04:00:00'
- INTERVAL '23 HOURS'                                        INTERVAL     '-23:00:00'
DATE '2001-10-01' - DATE '2001-09-28'                        INTEGER      '3'
DATE '2001-10-01' - INTEGER '7'                              DATE         '2001-09-24'
DATE '2001-09-28' - INTERVAL '1 HOUR'                        TIMESTAMP    '2001-09-27 23:00:00'
TIME '05:00' - TIME '03:00'                                  INTERVAL     '02:00:00'
TIME '05:00' - INTERVAL '2 HOURS'                            TIME         '03:00:00'
TIMESTAMP '2001-09-28 23:00' - INTERVAL '23 HOURS'           TIMESTAMP    '2001-09-28 00:00:00'
INTERVAL '1 DAY' - INTERVAL '1 HOUR'                         INTERVAL     '1 DAY -01:00:00'
TIMESTAMP '2001-09-29 03:00' - TIMESTAMP '2001-09-27 12:00'  INTERVAL     '1 DAY 15:00:00'
900 * INTERVAL '1 SECOND'                                    INTERVAL     '00:15:00'
21 * INTERVAL '1 DAY'                                        INTERVAL     '21 DAYS'
DOUBLE PRECISION '3.5' * INTERVAL '1 HOUR'                   INTERVAL     '03:30:00'
INTERVAL '1 HOUR' / DOUBLE PRECISION '1.5'                   INTERVAL     '00:40:00'

Mathematical Operators

Mathematical operators are provided for many data types.

Operator  Description                                     Example    Result
!         Factorial                                       5 !        120
+         Addition                                        2 + 3      5
-         Subtraction                                     2 - 3      -1
*         Multiplication                                  2 * 3      6
/         Division (integer division truncates results)   4 / 2      2
%         Modulo (remainder)                              5 % 4      1
^         Exponentiation                                  2.0 ^ 3.0  8
|/        Square root                                     |/ 25.0    5
||/       Cube root                                       ||/ 27.0   3
!!        Factorial (prefix operator)                     !! 5       120
@         Absolute value                                  @ -5.0     5
&         Bitwise AND                                     91 & 15    11
|         Bitwise OR                                      32 | 3     35
#         Bitwise XOR                                     17 # 5     20
~         Bitwise NOT                                     ~1         -2
<<        Bitwise shift left                              1 << 4     16
>>        Bitwise shift right                             8 >> 2     2

Notes
• The bitwise operators work only on integer data types, whereas the others are available for all numeric data types.
• Vertica supports the use of the factorial operators on positive and negative floating point (DOUBLE PRECISION (page 105)) numbers as well as integers. For example:
  SELECT 4.98!;
       ?column?
  ------------------
   115.978600750905
  (1 row)

Factorial is defined in terms of the gamma function, where (-1) = Infinity and the other negative integers are undefined. Factorial is defined as z! = gamma(z+1) for all complex numbers z. For example, (-4)! = NaN, whereas -4! = -(4!) = -24. See the Handbook of Mathematical Functions http://www.math.sfu.ca/~cbm/aands/ (1964) Section 6.1.5.

NULL Operators

To check whether a value is or is not NULL, use the constructs:
expression IS NULL
expression IS NOT NULL

Alternatively, use equivalent, but nonstandard, constructs:
expression ISNULL
expression NOTNULL

Do not write expression = NULL because NULL is not "equal to" NULL. (The null value represents an unknown value, and it is not known whether two unknown values are equal.) This behavior conforms to the SQL standard.

Note: Some applications might expect that expression = NULL returns true if expression evaluates to the null value. Vertica strongly recommends that these applications be modified to comply with the SQL standard.

String Concatenation Operators

To concatenate two strings on a single line, use the concatenation operator (two consecutive vertical bars).

Syntax
string || string

Parameters
string  Is an expression of type CHAR or VARCHAR

Notes
Two consecutive strings within a single SQL statement on separate lines are concatenated.

Examples
SELECT 'auto' || 'mobile';
  ?column?
------------
 automobile
(1 row)

SELECT 'auto' 'mobile';
  ?column?
------------
 automobile
(1 row)

Expressions

SQL expressions are the components of a query that compare a value or values against other values. They can also perform calculations. Expressions found inside any SQL command are usually in the form of a conditional statement.

Operator Precedence

The following table shows operator precedence in decreasing (high to low) order.

Operator/Element  Associativity  Description
.                 left           table/column name separator
::                left           typecast
[ ]               left           array element selection
-                 right          unary minus
^                 left           exponentiation
* / %             left           multiplication, division, modulo
+ -               left           addition, subtraction
IS                               IS TRUE, IS FALSE, IS UNKNOWN, IS NULL
IN                               set membership
BETWEEN                          range containment
OVERLAPS                         time interval overlap
LIKE                             string pattern matching
< >                              less than, greater than
=                 right          equality, assignment
NOT               right          logical negation
AND               left           logical conjunction
OR                left           logical disjunction

Note: When an expression includes more than one operator, Vertica Systems, Inc. recommends that you specify the order of operation using parentheses, rather than relying on operator precedence.

Expression Evaluation Rules

The order of evaluation of subexpressions is not defined. In particular, the inputs of an operator or function are not necessarily evaluated left-to-right or in any other fixed order. To force evaluation in a specific order, use a CASE (page 73) construct. For example, this is an untrustworthy way of trying to avoid division by zero in a WHERE clause:

SELECT x, y WHERE x <> 0 AND y/x > 1.5;


But this is safe:
SELECT x, y WHERE CASE WHEN x <> 0 THEN y/x > 1.5 ELSE false END;

A CASE construct used in this fashion defeats optimization attempts, so it should only be done when necessary. (In this particular example, it would doubtless be best to sidestep the problem by writing y > 1.5*x instead.)
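The sidestep suggested above, written out explicitly (the table name tab is hypothetical; this follows the document's own y > 1.5*x suggestion):

```sql
-- Instead of guarding the division with CASE, avoid the division altogether:
SELECT x, y FROM tab WHERE y > 1.5 * x;
```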

Aggregate Expressions
An aggregate expression represents the application of an aggregate function (page 112) across the rows or groups of rows selected by a query. Using AVG as an example, the syntax of an aggregate expression is one of the following. Invokes the aggregate across all input rows for which the given expression yields a non-null value:
AVG (expression)

Is the same as AVG(expression), since ALL is the default:
AVG (ALL expression)

Invokes the AVG function across all input rows for all distinct, non-null values of the expression, where expression is any value expression that does not itself contain an aggregate expression.
AVG (DISTINCT expression)

An aggregate expression can appear only in the select list or HAVING clause of a SELECT statement. It is forbidden in other clauses, such as WHERE, because those clauses are evaluated before the results of aggregates are formed.
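A brief sketch of the three forms above, using AVG (the table and column names are hypothetical):

```sql
SELECT AVG(price) FROM product;           -- averages all non-null values
SELECT AVG(ALL price) FROM product;       -- identical: ALL is the default
SELECT AVG(DISTINCT price) FROM product;  -- counts each distinct non-null value once
```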



CASE Expressions
The CASE expression is a generic conditional expression that can be used wherever an expression is valid. It is similar to case and if/then/else statements in other languages. Syntax (form 1)
CASE WHEN condition THEN result [ WHEN condition THEN result ]... [ ELSE result ] END

Parameters
condition    Is an expression that returns a Boolean (true/false) result. If the result is false, subsequent WHEN clauses are evaluated in the same manner.
result       Specifies the value to return when the associated condition is true.
ELSE result  If no condition is true, then the value of the CASE expression is the result in the ELSE clause. If the ELSE clause is omitted and no condition matches, the result is null.

Syntax (form 2)
CASE expression WHEN value THEN result [ WHEN value THEN result ]... [ ELSE result ] END

Parameters
expression   Is an expression that is evaluated and compared to all the value specifications in the WHEN clauses until one is found that is equal.
value        Specifies a value to compare to the expression.
result       Specifies the value to return when the expression is equal to the specified value.
ELSE result  Specifies the value to return when the expression is not equal to any value; if no ELSE clause is specified, the value returned is null.

Notes The data types of all the result expressions must be convertible to a single output type.



Examples
SELECT * FROM test;
 a
---
 1
 2
 3

SELECT a,
       CASE WHEN a=1 THEN 'one'
            WHEN a=2 THEN 'two'
            ELSE 'other'
       END
FROM test;
 a | case
---+-------
 1 | one
 2 | two
 3 | other

SELECT a,
       CASE a WHEN 1 THEN 'one'
              WHEN 2 THEN 'two'
              ELSE 'other'
       END
FROM test;
 a | case
---+-------
 1 | one
 2 | two
 3 | other

Special Example A CASE expression does not evaluate subexpressions that are not needed to determine the result. You can use this behavior to avoid division-by-zero errors:
SELECT x FROM T1 WHERE CASE WHEN x <> 0 THEN y/x > 1.5 ELSE false END;

Column References
Syntax
[ [ schemaname. ] tablename. ] columnname

Parameters
schemaname Is the name of the schema


tablename   Is one of:
            • The name of a table
            • An alias for a table defined by means of a FROM clause in a query
columnname  Is the name of a column that must be unique across all the tables being used in a query

Notes There are no space characters in a column reference. If you do not specify a schemaname, Vertica searches the existing schemas according to the order defined in the SET SEARCH_PATH command. Example This example uses the schema from the VMart Example Database. In the following command, transaction_type and transaction_time are the unique column references, store is the name of the schema, and store_sales_fact is the table name:
SELECT transaction_type, transaction_time FROM store.store_sales_fact ORDER BY transaction_time; transaction_type | transaction_time ------------------+-----------------purchase | 00:00:23 purchase | 00:00:32 purchase | 00:00:54 purchase | 00:00:54 purchase | 00:01:15 purchase | 00:01:30 purchase | 00:01:50 return | 00:03:34 return | 00:03:35 purchase | 00:03:39 purchase | 00:05:13 purchase | 00:05:20 purchase | 00:05:23 purchase | 00:05:27 purchase | 00:05:30 purchase | 00:05:35 purchase | 00:05:35 purchase | 00:05:42 return | 00:06:36 purchase | 00:06:39 (20 rows)
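The same columns can also be written fully qualified, as a sketch of the schemaname.tablename.columnname form described above (using the VMart names from the example):

```sql
SELECT store.store_sales_fact.transaction_type,
       store.store_sales_fact.transaction_time
FROM store.store_sales_fact;
```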

Comments
A comment is an arbitrary sequence of characters beginning with two consecutive hyphen characters and extending to the end of the line. For example:
-- This is a standard SQL comment



A comment is removed from the input stream before further syntax analysis and is effectively replaced by white space. Alternatively, C-style block comments can be used where the comment begins with /* and extends to the matching occurrence of */.
/* multiline comment * with nesting: /* nested block comment */ */

These block comments nest, as specified in the SQL standard. Unlike C, you can comment out larger blocks of code that might contain existing block comments.
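A small sketch of the nesting behavior described above:

```sql
SELECT 1; /* outer comment /* nested block comment */ still inside the outer comment */
-- A standard SQL comment extends only to the end of its line.
```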

Date/Time Expressions
Vertica uses an internal heuristic parser for all date/time input support. Dates and times are input as strings, and are broken up into distinct fields with a preliminary determination of what kind of information might be in the field. Each field is interpreted and either assigned a numeric value, ignored, or rejected. The parser contains internal lookup tables for all textual fields, including months, days of the week, and time zones.

The date/time type inputs are decoded using the following procedure:
• Break the input string into tokens and categorize each token as a string, time, time zone, or number.
• If the numeric token contains a colon (:), this is a time string. Include all subsequent digits and colons.
• If the numeric token contains a dash (-), slash (/), or two or more dots (.), this is a date string which might have a text month.
• If the token is numeric only, then it is either a single field or an ISO 8601 concatenated date (for example, 19990113 for January 13, 1999) or time (for example, 141516 for 14:15:16).
• If the token starts with a plus (+) or minus (-), then it is either a time zone or a special field.
• If the token is a text string, match up with possible strings:
  • Do a binary-search table lookup for the token as either a special string (for example, today), day (for example, Thursday), month (for example, January), or noise word (for example, at, on).
  • Set field values and bit mask for fields. For example, set year, month, day for today, and additionally hour, minute, second for now.
  • If not found, do a similar binary-search table lookup to match the token with a time zone.
  • If still not found, throw an error.
• When the token is a number or number field:
  • If there are eight or six digits, and if no other date fields have been previously read, then interpret as a "concatenated date" (for example, 19990118 or 990118). The interpretation is YYYYMMDD or YYMMDD.
  • If the token is three digits and a year has already been read, then interpret as day of year.
  • If four or six digits and a year has already been read, then interpret as a time (HHMM or HHMMSS).



  • If three or more digits and no date fields have yet been found, interpret as a year (this forces yy-mm-dd ordering of the remaining date fields).
  • Otherwise the date field ordering is assumed to follow the DateStyle setting: mm-dd-yy, dd-mm-yy, or yy-mm-dd. Throw an error if a month or day field is found to be out of range.
• If BC has been specified, negate the year and add one for internal storage. (There is no year zero in the Gregorian calendar, so numerically 1 BC becomes year zero.)
• If BC was not specified, and if the year field was two digits in length, then adjust the year to four digits. If the field is less than 70, then add 2000; otherwise add 1900.

Tip: Gregorian years AD 1-99 can be entered by using 4 digits with leading zeros (for example, 0099 is AD 99).

Month Day Year Ordering
For some formats, ordering of month, day, and year in date input is ambiguous, and there is support for specifying the expected ordering of these fields. See Date/Time Run-Time Parameters for information about output styles.

Special Date/Time Values
Vertica supports several special date/time values for convenience, as shown below. All of these values need to be written in single quotes when used as constants in SQL statements.

The values INFINITY and -INFINITY are specially represented inside the system and are displayed the same way. The others are simply notational shorthands that are converted to ordinary date/time values when read. (In particular, NOW and related strings are converted to a specific time value as soon as they are read.)
String       Valid Data Types         Description
epoch        DATE, TIMESTAMP          1970-01-01 00:00:00+00 (Unix system time zero)
INFINITY     TIMESTAMP                Later than all other time stamps
-INFINITY    TIMESTAMP                Earlier than all other time stamps
NOW          DATE, TIME, TIMESTAMP    Current transaction's start time. Note: NOW is not the same as the NOW (on page 159) function.
TODAY        DATE, TIMESTAMP          Midnight today
TOMORROW     DATE, TIMESTAMP          Midnight tomorrow
YESTERDAY    DATE, TIMESTAMP          Midnight yesterday
ALLBALLS     TIME                     00:00:00.00 UTC
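A brief sketch of these shorthands in use; the actual values returned depend on the current date and transaction, so no output is shown:

```sql
SELECT DATE 'TODAY', DATE 'YESTERDAY';  -- midnight today and yesterday, as dates
SELECT TIMESTAMP 'NOW';                 -- the current transaction's start time
SELECT TIME 'ALLBALLS';                 -- 00:00:00.00 UTC
```

Because NOW and its relatives are converted to a specific value as soon as they are read, re-running the same statement later yields a different result.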



The following SQL-compatible functions can also be used to obtain the current time value for the corresponding data type: • • • • • CURRENT_DATE (page 143) CURRENT_TIME (page 144) CURRENT_TIMESTAMP (page 144) LOCALTIME (page 156) LOCALTIMESTAMP (page 156)

The latter four accept an optional precision specification. (See Date/Time Functions (page 139).) Note however that these are SQL functions and are not recognized as data input strings.
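A short sketch of the functions above in a query; return values depend on the current date and time, so no output is shown:

```sql
SELECT CURRENT_DATE;
SELECT CURRENT_TIMESTAMP(2);  -- optional precision: 2 fractional second digits
SELECT LOCALTIMESTAMP;
```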

NULL Value
NULL is a reserved keyword used to indicate that a data value is unknown. Be very careful when using NULL in expressions: NULL is not greater than, less than, equal to, or not equal to any other expression. Use the Boolean-predicate (on page 80) for determining whether or not an expression value is NULL.
Notes
• NULL appears last (largest) in ascending order.
• Vertica also accepts NUL characters ('\0') in constant strings and no longer removes null characters from VARCHAR fields on input or output. NUL is the ASCII abbreviation of the NULL character.
See Also
GROUP BY Clause (page 388)
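The comparison behavior described above can be sketched as follows (a minimal illustration; the IS NULL form is covered by the NULL-predicate):

```sql
SELECT NULL = NULL;    -- returns NULL (unknown), not true
SELECT 1 <> NULL;      -- also NULL, not true
SELECT NULL IS NULL;   -- t: use IS [NOT] NULL to test for NULL
```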

Numeric Expressions
Vertica follows the IEEE specification for floating point, including NaN. A NaN is not greater than and at the same time not less than anything, even itself. In other words, comparisons always return false whenever a NaN is involved. Examples
SELECT CBRT('Nan'); -- cube root
 cbrt
------
 NaN
(1 row)
SELECT 'Nan' > 1.0;
 ?column?
----------
 f
(1 row)


Predicates
In general, predicates are truth-valued functions; that is, when invoked, they return a truth value. Predicates have a set of parameters and arguments. For example, in the following WHERE clause:
WHERE name = 'Smith';

• name = 'Smith' is the predicate
• 'Smith' is an expression

BETWEEN-predicate
The special BETWEEN predicate is available as a convenience. Syntax
a BETWEEN x AND y

Notes
a BETWEEN x AND y

Is equivalent to:
a >= x AND a <= y

Similarly:
a NOT BETWEEN x AND y

is equivalent to
a < x OR a > y
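As a sketch of the equivalence with concrete values (the table and column names here are hypothetical):

```sql
-- Hypothetical table "store" with a numeric "sales" column;
-- the two queries are expected to return the same rows.
SELECT * FROM store WHERE sales BETWEEN 100 AND 200;
SELECT * FROM store WHERE sales >= 100 AND sales <= 200;
```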


Boolean-predicate
The Boolean predicate retrieves rows where the value of an expression is true, false, or unknown (null). Syntax
expression IS [NOT] TRUE expression IS [NOT] FALSE expression IS [NOT] UNKNOWN

Notes
• A Boolean predicate always returns true or false, never null, even when the operand is null. A null input is treated as the value UNKNOWN.
• Do not confuse the boolean-predicate with Boolean Operators (on page 65) or the Boolean (page 93) data type, which can have only two values: true and false.
• IS UNKNOWN and IS NOT UNKNOWN are effectively the same as the NULL-predicate (page 86), except that the input expression does not have to be a single column value. To check a single column value for NULL, use the NULL-predicate (page 86).
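A hedged sketch of the predicate's handling of NULLs (the table and column are hypothetical):

```sql
-- Hypothetical table "orders" with a BOOLEAN column "shipped".
SELECT * FROM orders WHERE shipped IS TRUE;      -- only rows where shipped is true
SELECT * FROM orders WHERE shipped IS NOT TRUE;  -- rows where shipped is false or NULL
SELECT * FROM orders WHERE shipped IS UNKNOWN;   -- only rows where shipped is NULL
```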


column-value-predicate
Syntax
column-name comparison-op constant-expression

Parameters
column-name            Is a single column of one of the tables specified in the FROM clause (page 384).
comparison-op          Is one of the comparison operators (on page 65).
constant-expression    Is a constant value of the same data type as the column-name.

Notes To check a column value for NULL, use the NULL-predicate (page 86). Examples
Dimension.column1 = 2
Dimension.column2 = 'Seafood'


IN-predicate
Syntax
column-expression [ NOT ] IN ( list-expression )
Parameters
column-expression    A single column of one of the tables specified in the FROM clause (page 384).
list-expression      A comma-separated list of constant values matching the data type of the column-expression.
Examples
x IN (5, 6, 7)
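A minimal sketch of IN and NOT IN with a list of string constants (the table and column are hypothetical):

```sql
-- Hypothetical dimension table "dim" with a "state" column.
SELECT * FROM dim WHERE state IN ('MA', 'NH', 'ME');
SELECT * FROM dim WHERE state NOT IN ('CA', 'OR');
```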

join-predicate
Vertica supports only equi-joins based on a primary key-foreign key relationship between the joined tables.
Syntax
column-reference (see "Column References" on page 74) = column-reference (see "Column References" on page 74)
Parameters
column-reference    Refers to a column of one of the tables specified in the FROM clause (page 384).
See Also
Adding Primary Key and Foreign Key Constraints in the Administrator's Guide
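A sketch of an equi-join predicate over a hypothetical fact/dimension pair:

```sql
-- Hypothetical tables: "fact" with foreign key "dim_id",
-- "dim" with primary key "id".
SELECT f.sales, d.region
FROM fact f, dim d
WHERE f.dim_id = d.id;
```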

LIKE-predicate
Retrieves rows where the string value of a column matches a specified pattern. The pattern can contain one or more wildcard characters. ILIKE is equivalent to LIKE except that the match is case-insensitive (a non-standard extension).
Syntax
string { LIKE | ILIKE } pattern [ESCAPE escape-character]
string NOT { LIKE | ILIKE } pattern [ESCAPE escape-character]
Parameters
string                     (CHAR or VARCHAR) is the column value to be compared to the pattern.
NOT                        Returns true if LIKE returns false, and the reverse; equivalent to NOT string LIKE pattern.
pattern                    Specifies a string containing wildcard characters. Percent sign (%) matches any string of zero or more characters. Underscore (_) matches any single character.
ESCAPE escape-character    Specifies an escape-character. Causes the character to be treated as a literal, rather than a wildcard, when preceding an underscore or percent sign character in the pattern. The default escape character is the backslash (\) character. A null escape character ('') disables the escape mechanism.
Notes
• LIKE requires the entire string expression to match the pattern. To match a sequence of characters anywhere within a string, the pattern must start and end with a percent sign.
• The LIKE predicate does not ignore trailing "white space" characters. If the data values that you want to match have unknown numbers of trailing spaces, tabs, etc., terminate each LIKE predicate pattern with the percent sign wildcard character.
• To match the escape character itself, use two consecutive escape characters. To use a backslash character as a literal, specify a different escape character and use two backslashes. For example, if the escape character is circumflex (^), you would use ^\\ to specify a literal backslash. (Alternative: simply use four backslashes.)
• The use of a column data type other than character or character varying (implicit string conversion) is not supported and not recommended.
• Error messages caused by the LIKE predicate could refer to it by the following symbols instead of the actual keywords:
  ~~      LIKE
  ~~*     ILIKE
  !~~     NOT LIKE
  !~~*    NOT ILIKE

Examples
'abc' LIKE 'abc'    true
'abc' LIKE 'a%'     true
'abc' LIKE '_b_'    true
'abc' LIKE 'c'      false
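The case-sensitivity difference between LIKE and ILIKE can be sketched as:

```sql
SELECT 'Smith' LIKE 'Smi%';    -- t
SELECT 'Smith' LIKE 'smi%';    -- f: LIKE is case-sensitive
SELECT 'Smith' ILIKE 'smi%';   -- t: ILIKE matches case-insensitively
```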

NULL-predicate
Syntax
column-name IS [ NOT ] NULL
Parameters
column-name    Is a single column of one of the tables specified in the FROM clause (page 384).
Examples
a IS NULL
b IS NOT NULL
See Also
NULL Value (page 78)

Search Conditions
Function
Specifies a search condition for a WHERE clause, HAVING clause, or JOIN clause.
Syntax
{ expression compare expression
| expression IS [ NOT ] NULL
| expression [ NOT ] LIKE expression
| expression [ NOT ] IN ( { expr1, expr2 [, expr3 ] ... | subquery } )
| [NOT] EXISTS ( subquery )
| condition AND condition
| condition OR condition
| ( condition )
| ( condition, estimate )
| condition IS [ NOT ] { TRUE | FALSE | UNKNOWN }
}
Compare Parameters
{ = | > | < | >= | <= | <> | != }
=           equal
>           greater than
<           less than
>=          greater than or equal to
<=          less than or equal to
<> or !=    not equal
Notes
• SQL conditions do not follow Boolean logic, where conditions are either true or false. In SQL, every condition evaluates as one of TRUE, FALSE, or UNKNOWN. This is called three-valued logic. The result of a comparison is UNKNOWN if either value being compared is the NULL value. Rows satisfy a search condition if and only if the result of the condition is TRUE. Rows for which the condition is UNKNOWN do not satisfy the search condition. For more information, see NULL Value (page 78).
• Subqueries form an important class of expression that is used in many search conditions.
See Also
Expressions (page 70)
Subquery Expressions in the Programmer's Guide
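The effect of three-valued logic on search conditions can be sketched with a hypothetical table t whose column x contains a NULL:

```sql
-- Rows where x IS NULL satisfy neither condition, so the first
-- two counts together can be less than the total row count.
SELECT COUNT(*) FROM t WHERE x = 5;
SELECT COUNT(*) FROM t WHERE x <> 5;
SELECT COUNT(*) FROM t;
```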


SQL Data Types

Implicit Data Type Coercion
When there is no ambiguity as to the data type of an expression value, it is implicitly coerced to match the expected data type. For example:
SELECT 2 + '2';
 ?column?
----------
        4
(1 row)
The quoted string constant '2' is implicitly coerced into an INTEGER value so that it can be the operand of an arithmetic operator (addition).
SELECT 2 + 2 || 2;
 ?column?
----------
 42
(1 row)
The result of the arithmetic expression 2 + 2 and the INTEGER constant 2 are implicitly coerced into VARCHAR values so that they can be concatenated.
See Also
Data Type Coercion Operators (CAST) (page 66)

Binary Data Types
Store raw-byte data, such as IP addresses, up to 65000 bytes. The binary data types, BINARY and VARBINARY, are similar to the character data types (page 94), CHAR and VARCHAR, respectively, except that binary data types contain byte strings rather than character strings; binary strings store raw-byte data, while character strings store text. The allowable maximum length is the same for binary data types as it is for character data types, except that the length for BINARY and VARBINARY is a length in bytes, rather than in characters. A binary string is a sequence of octets, or bytes.
Syntax
BINARY (length)
{ VARBINARY | BINARY VARYING | BYTEA } (max-length)
Parameters
length | max-length    Specifies the length of the string.

BINARY       A fixed-width string of length bytes, where the number of bytes is declared as an optional specifier to the type. If length is omitted, the default is 1. Where necessary, values are right-extended to the full width of the column with the zero byte. For example:
             SELECT TO_HEX('ab'::BINARY(4));
               to_hex
             ----------
              61620000
             (1 row)
VARBINARY    A variable-width string up to a length of max-length bytes, where the maximum number of bytes is declared as an optional specifier to the type. The default is the default attribute size, which is 80, and the maximum length is 65000 bytes. Varbinary values are not extended to the full width of the column. For example:
             SELECT TO_HEX('ab'::VARBINARY(4));
              to_hex
             --------
              6162
             (1 row)
Notes
• BYTEA is a synonym for VARBINARY.
• You can use several formats when working with binary values (see Loading Binary Data), but the hexadecimal format is generally the most straightforward and is emphasized in Vertica documentation.
• The &, |, and # binary operands have special behavior for binary data types. See Binary Operators (page 62) for details and examples.
Inputs
On input, strings are translated from hexadecimal representation to a binary value using the HEX_TO_BINARY (page 204) function, and from bitstring representation using the BITSTRING_TO_BINARY (page 199) function. Both functions take a VARCHAR argument and return a VARBINARY value. See the Examples section below.
Binary values can also be represented in octal format by prefixing the value with a backslash '\'.
Note: If you use vsql, you must use the escape character (\) when inserting another backslash on input; for example, input '\141' as '\\141'.
You can also input values represented by printable characters. For example, the hexadecimal value '0x61' can also be represented by the glyph 'a'.
See Loading Binary Data in the Administrator's Guide for additional binary load formats.
Outputs
Like the input format, the output format is a hybrid of octal codes and printable ASCII characters. A byte in the range of printable ASCII characters (the range [0x20, 0x7e]) is represented by the corresponding ASCII character, with the exception of the backslash ('\'), which is escaped as '\\'. All other byte values are represented by their corresponding octal values. For example, the bytes {97,92,98,99}, which in ASCII are {a,\,b,c}, are translated to text as 'a\\bc'.
Supported Aggregate Functions
The following aggregate functions are supported for binary data types:

• BIT_AND (page 113)
• BIT_OR (page 114)
• BIT_XOR (page 115)
• MAX (page 120)
• MIN (page 120)
BIT_AND, BIT_OR, and BIT_XOR are bitwise operations that are applied to each non-null value in a group, while MAX and MIN are bytewise comparisons of binary values. Like their binary operator (page 62) counterparts, these aggregate functions operate on VARBINARY types explicitly and operate on BINARY types implicitly through casts. See Data Type Coercion Operators (CAST) (page 66).
Also, if the values in a group vary in length, the aggregate functions treat the values as though they are all equal in length by extending shorter values with zero bytes to the full width of the column. For example, given a group containing the values 'ff', null, and 'f', a binary aggregate ignores the null value and treats the value 'f' as 'f0'.
Examples
The following example shows VARBINARY HEX_TO_BINARY (page 204)(VARCHAR) and VARCHAR TO_HEX (page 225)(VARBINARY) usage.
Table t and its projection are created with binary columns:
CREATE TABLE t (c BINARY(1));
CREATE PROJECTION t_p (c) AS SELECT c FROM t;
SELECT IMPLEMENT_TEMP_DESIGN('');
Insert minimum byte and maximum byte values:
INSERT INTO t values(HEX_TO_BINARY('0x00'));
INSERT INTO t values(HEX_TO_BINARY('0xFF'));
Query table t to see column c output:
SELECT TO_HEX(c) FROM t;
 to_hex
--------
 00
 ff
(2 rows)
The BIT_AND, BIT_OR, and BIT_XOR functions are interesting when operating on a group of values. For example, create a sample table with a VARBINARY column. Note that the aggregate functions treat '0xff' like '0xFF00':
CREATE TABLE t (c VARBINARY(2));
INSERT INTO t values(HEX_TO_BINARY('0xFF'));
INSERT INTO t values(HEX_TO_BINARY('0xFFFF'));
INSERT INTO t values(HEX_TO_BINARY('0xF00F'));
SELECT TO_HEX(c) FROM t;
 to_hex
--------
 ff
 ffff
 f00f
(3 rows)
Now issue the bitwise AND operation. Because these are aggregate functions, an implicit GROUP BY operation is performed on results using (ff00&(ffff)&f00f):
SELECT TO_HEX(BIT_AND(c)) FROM t;
 to_hex
--------
 f000
(1 row)
Issue the bitwise OR operation on (ff00|(ffff)|f00f):
SELECT TO_HEX(BIT_OR(c)) FROM t;
 to_hex
--------
 ffff
(1 row)
Issue the bitwise XOR operation on (ff00#(ffff)#f00f):
SELECT TO_HEX(BIT_XOR(c)) FROM t;
 to_hex
--------
 f0f0
(1 row)
See Also
Aggregate functions BIT_AND (page 113), BIT_OR (page 114), BIT_XOR (page 115), MAX (page 120), and MIN (page 120)
Binary Operators (page 62)
COPY (page 323) for examples loading binary data
Data Type Coercion Operators (CAST) (page 66)
IP conversion functions INET_ATON (page 205), INET_NTOA (page 206), V6_ATON (page 228), V6_NTOA (page 229), V6_SUBNETA (page 230), V6_SUBNETN (page 230), V6_TYPE (page 231)
Loading Binary Data in the Administrator's Guide
String functions BITCOUNT (page 198), BITSTRING_TO_BINARY (page 199), HEX_TO_BINARY (page 204), LENGTH (page 211), REPEAT (page 217), SUBSTRING (page 223), TO_BITSTRING (page 224), and TO_HEX (page 225)

Boolean Data Type
Vertica provides the standard SQL type BOOLEAN, which has two states: true and false.
Syntax
BOOLEAN
Parameters
Valid literal data values for input are:
TRUE     't', 'true', 'y', 'yes', '1'
FALSE    'f', 'false', 'n', 'no', '0'
Notes
• Do not confuse the BOOLEAN data type with Boolean Operators (on page 65) or the Boolean-predicate (on page 80).
• The keywords TRUE and FALSE are preferred and are SQL-compliant. All other values must be enclosed in single quotes.
• Boolean values are output using the letters t and f.
• The third state in SQL boolean logic is unknown, which is represented by the NULL value.
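A quick sketch of the accepted literals; the casts shown are an assumed but conventional way to exercise the quoted forms:

```sql
SELECT TRUE, FALSE;                   -- output uses the letters t and f
SELECT 'yes'::BOOLEAN, '0'::BOOLEAN;  -- 'yes' is a TRUE literal, '0' a FALSE literal
```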

Character Data Types
Stores strings of letters, numbers, and symbols. Character data can be stored as fixed-length or variable-length strings; the difference is that fixed-length strings are right-padded with spaces, and variable-length strings are not padded.
Syntax
[ CHARACTER | CHAR ] ( length )
[ VARCHAR | CHARACTER VARYING ] ( length )
Parameters
length    Specifies the length of the string.
Usage
• CHAR is a fixed-length, blank padded string. The default length is 1 and the maximum length is 65000 bytes.
• VARCHAR is a variable-length character data type. The default length is 80 and the maximum length is 65000 bytes. The VARCHAR(length) field is processed internally as a NULL-padded string of maximum length n, with no blank padding added to the storage of the strings.
Notes
• When you define character columns, you specify the maximum size of any string to be stored in the column. For example, if you want to store strings up to 24 characters in length, you could use either of the following definitions:
  CHAR(24)    /* fixed-length */
  VARCHAR(24) /* variable-length */
• If the data being loaded into a text column exceeds the maximum size for that type, data is truncated to the specified number of characters.
• Due to compression in Vertica, the cost of over-estimating the length of these fields is incurred primarily at load time and during sorts.
• Remember to include the extra bytes required for multibyte characters in the column-width declaration.
• String literals in SQL statements must be enclosed in single quotes.
• NULL appears last (largest) in ascending order. See also GROUP BY Clause (page 388) for additional information about null ordering.
ASCII NULs
• VARCHAR data types accept ASCII NULs; NULL characters are handled as ordinary characters.
• CHAR columns are right-extended with zeroes to the full width of the column, as needed.
• Values terminate at their first NULL byte, if any.
The following example casts the input string containing NUL values to VARCHAR:
SELECT 'vert\0ica'::CHARACTER VARYING;
 varchar
---------
 vert
(1 row)
The following example casts the input string containing NUL values to VARBINARY:
SELECT 'vert\0ica'::BINARY VARYING;
  varbinary
-------------
 vert\000ica
(1 row)
In both cases, the result contains 8 characters, but in the VARCHAR case, the '\000' is not visible:
SELECT LENGTH('vert\0ica'::CHARACTER VARYING);
 length
--------
 8
(1 row)
SELECT LENGTH('vert\0ica'::BINARY VARYING);
 length
--------
 8
(1 row)

Date/Time Data Types
Vertica supports the full set of SQL date and time types. In most cases, a combination of DATE, TIME, TIMESTAMP WITHOUT TIME ZONE, and TIMESTAMP WITH TIME ZONE should provide a complete range of date/time functionality required by any application. Vertica also supports the TIME WITH TIME ZONE data type, in compliance with the SQL standard.
Notes
• Vertica uses Julian dates for all date/time calculations, based on the assumption that the length of the year is 365.2425 days. Julian dates can correctly predict and calculate any date more recent than 4713 BC to far into the future.
• All date/time types are stored in eight bytes.
• A date/time value of NULL appears first (smallest) in ascending order.
• All the date/time data types accept the special literal value NOW to specify the current date and time. For example:
  SELECT TIMESTAMP 'NOW';
• In Vertica, intervals (page 102) are represented internally as some number of microseconds and printed as up to 60 seconds, 60 minutes, 24 hours, 30 days, 12 months, and as many years as necessary. Fields are either positive or negative.
• TIME ZONE is a synonym for TIMEZONE.

DATE
Consists of a month, day, and year.
Syntax
DATE
Parameters
Low Value: 4713 BC    High Value: 32767 AD    Resolution: 1 DAY
See SET DATESTYLE (page 396) for information about ordering.
Example             Description
January 8, 1999     Unambiguous in any datestyle input mode
1999-01-08          ISO 8601; January 8 in any mode (recommended format)
1/8/1999            January 8 in MDY mode; August 1 in DMY mode
1/18/1999           January 18 in MDY mode; rejected in other modes
01/02/03            January 2, 2003 in MDY mode; February 1, 2003 in DMY mode; February 3, 2001 in YMD mode
1999-Jan-08         January 8 in any mode
Jan-08-1999         January 8 in any mode
08-Jan-1999         January 8 in any mode
99-Jan-08           January 8 in YMD mode, else error
08-Jan-99           January 8, except error in YMD mode
Jan-08-99           January 8, except error in YMD mode
19990108            ISO 8601; January 8, 1999 in any mode
990108              ISO 8601; January 8, 1999 in any mode
1999.008            Year and day of year
J2451187            Julian day
January 8, 99 BC    Year 99 before the Common Era

TIME
Consists of a time of day with or without a time zone.
Syntax
TIME [ (p) ] [ { WITH | WITHOUT } TIME ZONE ] | TIMETZ [ AT TIME ZONE (see "TIME AT TIME ZONE" on page 99) ]
Parameters
p                 (Precision) specifies the number of fractional digits retained in the seconds field. By default, there is no explicit bound on precision. The allowed range is 0 to 6.
WITH TIME ZONE    Specifies that valid values must include a time zone.

WITHOUT TIME ZONE    Specifies that valid values do not include a time zone (default). If a time zone is specified in the input it is silently ignored.
TIMETZ               Is the same as TIME WITH TIME ZONE with no precision.

Limits
Data Type                  Low Value        High Value       Resolution
TIME [p]                   00:00:00.00      23:59:59.99      1 ms / 14 digits
TIME [p] WITH TIME ZONE    00:00:00.00+12   23:59:59.99-12   1 ms / 14 digits

Example           Description
04:05:06.789      ISO 8601
04:05:06          ISO 8601
04:05             ISO 8601
040506            ISO 8601
04:05 AM          Same as 04:05; AM does not affect value
04:05 PM          Same as 16:05; input hour must be <= 12
04:05:06.789-8    ISO 8601
04:05:06-08:00    ISO 8601
04:05-08:00       ISO 8601
040506-08         ISO 8601
04:05:06 PST      Time zone specified by name


TIME AT TIME ZONE
The TIME AT TIME ZONE construct converts TIMESTAMP and TIMESTAMP WITH TIME ZONE types to different time zones. TIME ZONE is a synonym for TIMEZONE. Both are allowed in Vertica syntax.
Syntax
timestamp AT TIME ZONE zone

Parameters
timestamp    One of the following:
             TIMESTAMP                   Converts local time in given time zone to UTC
             TIMESTAMP WITH TIME ZONE    Converts UTC to local time in given time zone
             TIME WITH TIME ZONE         Converts local time across time zones
zone         Is the desired time zone specified either as a text string (for example: 'PST') or as an interval (for example: INTERVAL '-08:00'). In the text case, the available zone names are abbreviations.

Examples
The local time zone is EST5EDT. The first example takes a zone-less timestamp and interprets it as MST time (UTC-7) to produce a UTC timestamp, which is then rotated to EST (UTC-5) for display:
SELECT TIMESTAMP '2001-02-16 20:38:40' AT TIME ZONE 'MST';
        timezone
------------------------
 2001-02-16 22:38:40-05
(1 row)

The second example takes a timestamp specified in EST (UTC-5) and converts it to local time in MST (UTC-7):
SELECT TIMESTAMP WITH TIME ZONE '2001-02-16 20:38:40-05' AT TIME ZONE 'MST';
      timezone
---------------------
 2001-02-16 18:38:40
(1 row)

TIMESTAMP
Consists of a date and a time with or without a time zone and with or without a historical epoch (AD or BC). Syntax
TIMESTAMP [ (p) ] [ { WITH | WITHOUT } TIME ZONE ] | TIMESTAMPTZ [ AT TIME ZONE (see "TIME AT TIME ZONE" on page 99) ]


Parameters
p                    (Precision) specifies the number of fractional digits retained in the seconds field. By default, there is no explicit bound on precision. The allowed range is 0 to 6.
WITH TIME ZONE       Specifies that valid values must include a time zone. All TIMESTAMP WITH TIME ZONE values are stored internally in UTC. They are converted to local time in the zone specified by the time zone configuration parameter before being displayed to the client.
WITHOUT TIME ZONE    Specifies that valid values do not include a time zone (default). If a time zone is specified in the input it is silently ignored.
TIMESTAMPTZ          Is the same as TIMESTAMP WITH TIME ZONE.

Limits
Data Type                                  Low Value (rounded)   High Value (rounded)   Resolution
TIMESTAMP [ (p) ] [ WITHOUT TIME ZONE ]    290279 BC             294277 AD              1 us / 14 digits
TIMESTAMP [ (p) ] WITH TIME ZONE           290279 BC             294277 AD              1 us / 14 digits

Notes
• AD/BC can appear before the time zone, but this is not the preferred ordering.
• The SQL standard differentiates TIMESTAMP WITHOUT TIME ZONE and TIMESTAMP WITH TIME ZONE literals by the existence of a "+" or "-". Hence, according to the standard:
  TIMESTAMP '2004-10-19 10:23:54' is a TIMESTAMP WITHOUT TIME ZONE.
  TIMESTAMP '2004-10-19 10:23:54+02' is a TIMESTAMP WITH TIME ZONE.
  Note: Vertica differs from the standard by requiring that TIMESTAMP WITH TIME ZONE literals be explicitly typed:
  TIMESTAMP WITH TIME ZONE '2004-10-19 10:23:54+02'
• If a literal is not explicitly indicated as being of TIMESTAMP WITH TIME ZONE, Vertica silently ignores any time zone indication in the literal. That is, the resulting date/time value is derived from the date/time fields in the input value, and is not adjusted for time zone.
• For TIMESTAMP WITH TIME ZONE, the internally stored value is always in UTC. An input value that has an explicit time zone specified is converted to UTC using the appropriate offset for that time zone. If no time zone is stated in the input string, then it is assumed to be in the time zone indicated by the system's TIME ZONE parameter, and is converted to UTC using the offset for the TIME ZONE zone.


• When a TIMESTAMP WITH TIME ZONE value is output, it is always converted from UTC to the current TIME ZONE zone and displayed as local time in that zone. To see the time in another time zone, either change TIME ZONE or use the AT TIME ZONE (page 99) construct.
• Conversions between TIMESTAMP WITHOUT TIME ZONE and TIMESTAMP WITH TIME ZONE normally assume that the TIMESTAMP WITHOUT TIME ZONE value should be taken or given as TIME ZONE local time. A different zone reference can be specified for the conversion using AT TIME ZONE.
• TIMESTAMPTZ and TIMETZ are not parallel SQL constructs. TIMESTAMPTZ records a time and date in GMT, converting from the specified TIME ZONE. TIMETZ records the specified time and the specified time zone, in minutes, from GMT.
• The following list represents typical date/time input variations:
  § 1999-01-08 04:05:06
  § 1999-01-08 04:05:06 -8:00
  § January 8 04:05:06 1999 PST
• Vertica supports adding a floating-point value (in days) to a TIMESTAMP or TIMESTAMPTZ value.
• In Vertica, intervals (page 102) are represented internally as some number of microseconds and printed as up to 60 seconds, 60 minutes, 24 hours, 30 days, 12 months, and as many years as necessary. Fields are either positive or negative.

Examples In the following example, Vertica returns results in years, months, and days, whereas other RDBMS might return results in days only:
SELECT TIMESTAMP WITH TIME ZONE '02/02/294276' - TIMESTAMP WITHOUT TIME ZONE '02/20/2009' AS result;
            result
------------------------------
 292266 years 11 mons 12 days
(1 row)

To specify a specific time zone, add it to the statement, such as the use of 'ACST' in the following example:
SELECT T1 AT TIME ZONE 'ACST', t2 FROM test;
      timezone       |     t2
---------------------+-------------
 2009-01-01 04:00:00 | 02:00:00-07
 2009-01-01 01:00:00 | 02:00:00-04
 2009-01-01 04:00:00 | 02:00:00-06

You can specify a floating point in days:
SELECT 'NOW'::TIMESTAMPTZ + INTERVAL '1.5 day' AS '1.5 days from now';
      1.5 days from now
------------------------------
 2009-03-18 21:35:23.633-04
(1 row)

You can return infinity by simply specifying 'infinity':
SELECT TIMESTAMP 'infinity';

 timestamp
-----------
 infinity
(1 row)

The following example illustrates the difference between TIMESTAMPTZ with and without a precision specified:
SELECT TIMESTAMPTZ(3) 'now', TIMESTAMPTZ 'now';
        timestamptz         |          timestamptz
----------------------------+-------------------------------
 2009-02-24 11:40:26.177-05 | 2009-02-24 11:40:26.177368-05
(1 row)

The following statement returns an error because the TIMESTAMP is out of range:
SELECT TIMESTAMP '294277-01-09 04:00:54.775808';
ERROR: date/time field value out of range: "294277-01-09 04:00:54.775808"

INTERVAL
Measures the difference between two points in time. It is represented internally as a number of microseconds and printed out as up to: • 60 seconds • 60 minutes • 24 hours • 30 days • 12 months • As many years as necessary All the fields are either positive or negative. Syntax
INTERVAL [ (p) ]

Parameters
p (Precision) specifies the number of fractional digits retained in the seconds field in the range 0 to 6. The default is the precision of the input literal.

Notes Intervals can be expressed as a combination of fields, some positive and some negative. Vertica adds these all together to get some number of microseconds, counting days as 24 hours and months as 30 days. Examples
SELECT INTERVAL '-1 +02:03' AS "22 hours ago...";
 22 hours ago...
-----------------
 -21:57:00
(1 row)


SELECT INTERVAL '-1 days +02:03' AS "22 hours ago...";
 22 hours ago...
-----------------
 -21:57:00
(1 row)
SELECT INTERVAL '10 years -11 month -12 days +13:14' AS "9 years...";
        9 years...
--------------------------
 9 years 18 days 13:14:00
(1 row)
SELECT '4 millenniums 5 centuries 4 decades 1 year 4 months 4 days 17 minutes 31 seconds'::INTERVAL;
             interval
-----------------------------------
 4541 years 4 mons 4 days 00:17:31
(1 row)

See Also Interval Values (page 60) for a description of the values that can be represented in an INTERVAL type.

Numeric Data Types
Numeric data types are numbers (such as integers) stored in columns in the Vertica® Analytic Database. In Vertica, overflows in floats generate +/-infinity, and 0.0/0.0 returns NaN, per the IEEE floating point standard:
SELECT 0.0/0;
 ?column?
----------
 NaN
(1 row)

For integers, dividing zero by zero returns zero:
SELECT 0/0;
 ?column?
----------
 0
(1 row)

Dividing anything else by zero returns a runtime error.
SELECT 1/0;
ERROR: division by zero

Add, subtract, and multiply ignore overflow. Sum and average use 128-bit arithmetic internally. Sum reports an error if the final result overflows, suggesting the use of sum_float(int), which converts the 128-bit sum to a float8. For example:
CREATE TEMP TABLE t (i INT);
INSERT INTO t VALUES (1<<62);
INSERT INTO t VALUES (1<<62);
INSERT INTO t VALUES (1<<62);

INSERT INTO t VALUES (1<<62);
INSERT INTO t VALUES (1<<62);
SELECT SUM(i) FROM t;
ERROR: sum() overflowed
HINT: try sum_float() instead
SELECT SUM_FLOAT(i) FROM t;
      sum_float
---------------------
 2.30584300921369e+19


DOUBLE PRECISION (FLOAT)
Vertica supports the numeric data type DOUBLE PRECISION, which is the IEEE-754 8-byte floating point type, along with most of the usual floating point operations. Syntax
[ DOUBLE PRECISION | FLOAT | FLOAT8 ]

Parameters
Note: On a machine whose floating-point arithmetic does not follow IEEE 754, these values probably do not work as expected.
Double precision is an inexact, variable-precision numeric type. In other words, some values cannot be represented exactly and are stored as approximations. Thus, input and output operations involving double precision might show slight discrepancies.
• For exact numeric storage and calculations (money for example), use INTEGER.
• Floating point calculations depend on the behavior of the underlying processor, operating system, and compiler.
• Comparing two floating-point values for equality might not work as expected.
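The equality caveat can be sketched as follows; treat this as illustrative, since the exact comparison result depends on how the literals are evaluated:

```sql
-- 0.1 and 0.2 have no exact binary floating-point representation,
-- so their sum may not compare equal to 0.3.
SELECT 0.1 + 0.2 = 0.3;
```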

Values
COPY (page 323) accepts floating-point data in the following format:
1 Optional leading white space
2 An optional plus ("+") or minus sign ("-")
3 A decimal number, a hexadecimal number, an infinity, a NaN, or a null value
A decimal number consists of a non-empty sequence of decimal digits possibly containing a radix character (decimal point "."), optionally followed by a decimal exponent. A decimal exponent consists of an "E" or "e", followed by an optional plus or minus sign, followed by a non-empty sequence of decimal digits, and indicates multiplication by a power of 10.
A hexadecimal number consists of a "0x" or "0X" followed by a non-empty sequence of hexadecimal digits possibly containing a radix character, optionally followed by a binary exponent. A binary exponent consists of a "P" or "p", followed by an optional plus or minus sign, followed by a non-empty sequence of decimal digits, and indicates multiplication by a power of 2. At least one of radix character and binary exponent must be present.
An infinity is either "INF" or "INFINITY", disregarding case.
A NaN (Not A Number) is "NAN" (disregarding case) optionally followed by a sequence of characters enclosed in parentheses. The character string specifies the value of NAN in an implementation-dependent manner. (The Vertica internal representation of NAN is 0xfff8000000000000LL on x86 machines.)
When writing infinity or NaN values as constants in a SQL statement, enclose them in single quotes. For example:
UPDATE table SET x = 'Infinity'
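The literal grammar above (decimal exponents, hexadecimal mantissas with binary exponents, infinities, NaNs) is the standard C-family floating-point grammar. Python's float parser, shown here purely as an illustration outside Vertica, accepts the same forms:

```python
# Leading white space, optional sign, decimal exponent ("e2" = times 10^2):
print(float("  +3.14e2"))        # 314.0
# Hex mantissa with binary exponent ("p1" = times 2^1):
print(float.fromhex("0x1.8p1"))  # 3.0  (0x1.8 = 1.5, times 2)
# Infinities and NaNs are recognized regardless of case:
print(float("INFINITY"))         # inf
print(float("nan"))              # nan
```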

Note: Vertica follows the IEEE definition of NaNs (IEEE 754). The SQL standards do not specify how floating point works in detail. IEEE defines NaNs as a set of floating-point values where each one is not equal to anything, even to itself. A NaN is not greater than, and at the same time not less than, anything, even itself. In other words, comparisons always return false whenever a NaN is involved. However, for the purpose of sorting data, NaN values must be placed somewhere in the result.
The value generated by 'NaN' appearing in the context of a floating-point number matches the NaN value generated by the hardware. For example, Intel hardware generates (0xfff8000000000000LL), which is technically a Negative, Quiet, Non-signaling NaN. Vertica uses a different NaN value to represent floating-point NULL (0x7ffffffffffffffeLL). This is a Positive, Quiet, Non-signaling NaN and is reserved by Vertica.
The load file format of a null value is user defined, as described in the COPY (page 323) command. The Vertica internal representation of a null value is 0x7fffffffffffffffLL. The interactive format is controlled by the vsql printing option null. For example:
\pset null '(null)'

The default option is not to print anything.
Rules
• -0 == +0
• 1/0 = Infinity
• 0/0 == NaN
• NaN != anything (even NaN)
To search for NaN column values, use the following predicate:
... WHERE column != column

This is necessary because WHERE column = 'NaN' cannot be true by definition.
Sort Order (Ascending)
• NaN
• -Inf
• numbers
• +Inf
• NULL
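The rules and the column != column predicate follow directly from IEEE 754 semantics, which any conforming language can demonstrate. A Python sketch (illustrative only, not Vertica code):

```python
import math
import struct

nan = float("nan")
# Every comparison involving NaN is false, even equality with itself:
print(nan == nan, nan < 1.0, nan > 1.0)   # False False False
print(-0.0 == 0.0)                        # True   (-0 == +0)
print(1e308 * 10)                         # inf    (float overflow -> infinity)

# "column != column" is true exactly for the NaN rows:
rows = [1.5, nan, -2.0]
print([x != x for x in rows])             # [False, True, False]

# The quiet-NaN bit pattern the manual cites for x86 decodes to a NaN:
hw_nan = struct.unpack("<d", struct.pack("<Q", 0xFFF8000000000000))[0]
print(math.isnan(hw_nan))                 # True
```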

Notes
• Vertica does not support REAL (FLOAT4) or NUMERIC.
• NULL appears last (largest) in ascending order.
• All overflows in floats generate +/-infinity or NaN, per the IEEE floating point standard.


SQL Data Types

INTEGER
A signed 8-byte (64-bit) data type.
Syntax
[ INTEGER | INT | BIGINT | INT8 ]

Parameters
INT, INTEGER, INT8, and BIGINT are all synonyms for the same signed 64-bit integer data type. Integer data types of other lengths are not supported at this time. Automatic compression techniques are used to conserve disk space in cases where the full 64 bits are not required.
Notes
• The range of values is -2^63+1 to 2^63-1.
• 2^63 = 9,223,372,036,854,775,808 (19 digits).
• The value -2^63 is reserved to represent NULL.
• NULL appears first (smallest) in ascending order.
• Vertica does not have an explicit 4-byte (32-bit integer) type. Vertica's encoding and compression automatically eliminate the extra space.

Restrictions
• The JDBC type INTEGER is 4 bytes and is not supported by Vertica. Use BIGINT instead.
• Vertica does not support the SQL/JDBC types NUMERIC, SMALLINT, or TINYINT.
• Vertica does not check for overflow (positive or negative) except in the aggregate function SUM (page 122)(). If you encounter overflow when using SUM, use SUM_FLOAT (page 123)(), which converts to floating point.
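The range arithmetic in the Notes, and the wraparound that unchecked 64-bit overflow typically produces, can be verified with a short sketch (Python, illustrative only; the manual states only that Vertica does not check overflow, not what the wrapped value would be):

```python
# Signed 64-bit bounds; Vertica reserves -2**63 itself to represent NULL.
hi = 2**63 - 1
lo = -2**63 + 1
print(hi)                 # 9223372036854775807
print(len(str(2**63)))    # 19  (2^63 has 19 decimal digits)

def wrap64(n):
    """Two's-complement wraparound of an unbounded integer into 64 bits."""
    n &= (1 << 64) - 1
    return n - (1 << 64) if n >= (1 << 63) else n

print(wrap64(hi + 1))     # -9223372036854775808  (one past the top wraps)
```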

NUMERIC
Numeric data types store numeric data. For example, a money value of $123.45 could be stored in a NUMERIC(5,2) field.
Syntax
NUMERIC | DECIMAL | NUMBER | MONEY [ ( precision [ , scale ] ) ]

Parameters
precision   The number of significant decimal digits, or the number of digits that the data type stores. Precision p must be positive and <= 1024.

scale       Expressed in decimal digits and can be any integer representable in a 16-bit field. The scale s must satisfy 0 <= scale <= precision; omitting scale is the same as s = 0.


Notes
• NUMERIC, DECIMAL, NUMBER, and MONEY are all synonyms that return NUMERIC types. Note, however, that the default values for NUMBER and MONEY are implemented a bit differently:

   type    | precision | scale
  ---------+-----------+-------
   NUMERIC |        37 |    15
   DECIMAL |        37 |    15
   NUMBER  |        38 |     0
   MONEY   |        18 |     4

• Numeric data types support exact representations of numbers that can be expressed with a number of digits before and after a decimal point. Numeric data types are generally called exact numeric data types because they store numbers of a specified precision and scale. By contrast, approximate numeric data types (DOUBLE PRECISION) use floating points and are less precise. This contrasts slightly with existing Vertica data types: DOUBLE PRECISION (page 105) (FLOAT) types support ~15 digits, variable exponent, and represent numeric values approximately; INTEGER (page 107) (and similar types) support ~18 digits, whole numbers only.
• NUMERIC is now preferred for non-INT constants, which should improve precision. For example:
SELECT 1.1 + 2.2 = 3.3;
 ?column?
----------
 t
(1 row)
SELECT 1.1::float + 2.2::float = 3.3::float;
 ?column?
----------
 f
(1 row)
• Supported operations include the following:
  § Basic math (+, -, *)
  § Aggregation (SUM, MIN, MAX, COUNT)
  § Comparison operators (<, <=, =, <=>, <>, >, >=)
• LZO, RLE, and BLOCK_DICT are supported encoding types. Anything that can be used on an INTEGER can also be used on a NUMERIC, as long as the precision is <= 18.

Restrictions and Cautions
• If you use a NUMERIC data type with a precision greater than 18, performance could be affected.

• Some of the more complex operations used with NUMERIC data types result in an implicit cast to FLOAT, including SQRT, LOG, transcendental functions, AVG and other aggregates, and TO_CHAR/TO_NUMBER formatting. Anything beyond SUM/COUNT produces a FLOAT result. For example, when doing division operations, the result is always FLOAT.

Input Values
COPY (page 323) accepts DECIMAL numbers with a decimal point ('.'), prefixed by - or + (optional).

Examples
The following command creates a table with two columns, one with INTEGER data type and the other with a NUMERIC data type:
CREATE TABLE num1 (id INTEGER, amount NUMERIC(8,2));
CREATE TABLE
Issue the following command, which creates a temporary physical schema design:
SELECT IMPLEMENT_TEMP_DESIGN('');
 IMPLEMENT_TEMP_DESIGN
-----------------------
                     4
(1 row)
Now insert some values into your table:
INSERT INTO num1 VALUES (1, 123456.78);
 OUTPUT
--------
      1
(1 row)
And do a simple SELECT on the table:
SELECT * FROM num1;
 id |  amount
----+-----------
  1 | 123456.78
(1 row)
The following syntax adds one (1) to the amount:
SELECT amount+1 AS 'amount' FROM num1;
  amount
-----------
 123457.78
(1 row)
The following example returns the numeric column, amount, from table num1:
SELECT amount FROM num1;
  amount
-----------
 123456.78
(1 row)

The following syntax multiplies the amount column by 2:
SELECT amount*2 AS 'amount' FROM num1;
  amount
-----------
 246913.56
(1 row)
The following syntax returns a negative number for the amount column:
SELECT -amount FROM num1;
  ?column?
------------
 -123456.78
(1 row)
The following syntax returns the absolute value of the amount argument:
SELECT ABS(amount) FROM num1;
    ABS
-----------
 123456.78
(1 row)
The following syntax casts the NUMERIC amount as a FLOAT:
SELECT amount::float FROM num1;
  amount
-----------
 123456.78
(1 row)
See Also
Mathematical Functions (page 176)
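The exact-versus-approximate contrast in the Notes above (SELECT 1.1 + 2.2 = 3.3 returning t, while the ::float version returns f) mirrors decimal versus binary arithmetic and can be reproduced with Python's decimal module (a sketch outside the database):

```python
from decimal import Decimal

# Exact, NUMERIC-style decimal arithmetic:
print(Decimal("1.1") + Decimal("2.2") == Decimal("3.3"))  # True  (t)
# Approximate, DOUBLE PRECISION-style binary arithmetic:
print(1.1 + 2.2 == 3.3)                                   # False (f)
print(1.1 + 2.2)                                          # 3.3000000000000003
```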

SQL Functions
Functions return information from the database and are allowed anywhere an expression is allowed. This chapter describes the functions that Vertica supports.

Aggregate Functions
Aggregate functions summarize data over groups of rows from a query result set. The groups are specified using the GROUP BY (page 388) clause. Aggregate functions are allowed only in the select list and in the HAVING (see "HAVING Clause" on page 390) and ORDER BY (see "ORDER BY Clause" on page 391) clauses of a SELECT (page 382) statement (as described in Aggregate Expressions (page 72)).
Notes
• Except for COUNT, these functions return a null value when no rows are selected. In particular, SUM of no rows returns NULL, not zero.
• In some cases you can replace an expression that includes multiple aggregates with a single aggregate of an expression. For example, SUM(x) + SUM(y) can be expressed as SUM(x+y) (where x and y are NOT NULL).
• Vertica does not support nested aggregate functions.

AVG
Computes the average (arithmetic mean) of an expression over a group of rows. It returns a DOUBLE PRECISION value for a floating-point expression. Otherwise, the return value is the same as the expression data type.
Syntax
AVG ( [ ALL | DISTINCT ] expression )
Parameters
ALL         Invokes the aggregate function for all rows in the group (default)
DISTINCT    Invokes the aggregate function for all distinct non-null values of the expression found in the group
expression  (INTEGER, BIGINT, DOUBLE PRECISION, or INTERVAL) contains at least one column reference (see "Column References" on page 74)
Examples
SELECT AVG(annual_income) FROM customer_dimension;
     avg
--------------
 2104270.6485
(1 row)
SELECT AVG(pos_transaction_number) FROM store.store_sales_fact;
    avg
-----------
 2500000.5
(1 row)
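The null-handling rules in the Notes (COUNT returns zero for an empty group, while other aggregates return NULL) are standard SQL aggregate semantics. This SQLite-based sketch (Python stdlib, not Vertica; the table and data are made up) demonstrates them alongside AVG:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customer_dim (household_id INTEGER, annual_income REAL)")
con.executemany("INSERT INTO customer_dim VALUES (?, ?)",
                [(1, 50000.0), (1, 70000.0), (2, 90000.0)])

# AVG returns the arithmetic mean of the group:
print(con.execute("SELECT AVG(annual_income) FROM customer_dim").fetchone())            # (70000.0,)
# When no rows are selected, SUM returns NULL (None), not zero...
print(con.execute("SELECT SUM(annual_income) FROM customer_dim WHERE 1=0").fetchone())  # (None,)
# ...but COUNT returns 0:
print(con.execute("SELECT COUNT(annual_income) FROM customer_dim WHERE 1=0").fetchone())  # (0,)
```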

BIT_AND
Performs a bitwise logical AND operation on two BINARY data type columns.
Syntax
BIT_AND( column1, column2 )
Parameters
column  Are the two input BINARY data type columns.
Usage
• The function returns the value of the bitwise logical AND operation.
• BIT_AND operates on VARBINARY types explicitly and operates on BINARY types implicitly through casts.
• If either parameter is NULL, the return value is also NULL.
• If the columns are different lengths, the return values are treated as though they are all equal in length and are right-extended with zero bytes. For example, given a group containing the hex values 'ff', null, and 'f', the function ignores the null value and extends the value 'f' to 'f0'.
Example
First create a sample table and projections with binary columns:
CREATE TABLE t (c VARBINARY(2));
SELECT IMPLEMENT_TEMP_DESIGN('');
Note that the aggregate functions treat '0xff' like '0xFF00':
INSERT INTO t values(HEX_TO_BINARY('0xFF'));
INSERT INTO t values(HEX_TO_BINARY('0xFFFF'));
INSERT INTO t values(HEX_TO_BINARY('0xF00F'));
Query table t to see column c output:
SELECT TO_HEX(c) FROM t;
 to_hex
--------
 ff
 ffff
 f00f
(3 rows)
Finally, issue a bitwise AND operation on the binary column, treating the values as 'ff00', 'ffff', and 'f00f':
SELECT TO_HEX(BIT_AND(c)) FROM t;
 to_hex
--------
 f000
(1 row)

BIT_OR
Performs a bitwise logical OR operation on two BINARY data type columns.
Syntax
BIT_OR( column1, column2 )
Parameters
column  Are the two input BINARY data type columns.
Usage
• The function returns the value of the bitwise logical OR operation.
• BIT_OR operates on VARBINARY types explicitly and operates on BINARY types implicitly through casts.
• If either parameter is NULL, the return value is also NULL.
• If the columns are different lengths, the return values are treated as though they are all equal in length and are right-extended with zero bytes. For example, given a group containing the hex values 'ff', null, and 'f', the function ignores the null value and extends the value 'f' to 'f0'.
Example
First create a sample table and projections with binary columns:
CREATE TABLE t (c VARBINARY(2));
SELECT IMPLEMENT_TEMP_DESIGN('');
Note that the aggregate functions treat '0xff' like '0xFF00':
INSERT INTO t values(HEX_TO_BINARY('0xFF'));
INSERT INTO t values(HEX_TO_BINARY('0xFFFF'));
INSERT INTO t values(HEX_TO_BINARY('0xF00F'));
Query table t to see column c output:
SELECT TO_HEX(c) FROM t;
 to_hex
--------
 ff
 ffff
 f00f
(3 rows)
Finally, issue a bitwise OR operation:
SELECT TO_HEX(BIT_OR(c)) FROM t;
 to_hex
--------
 ffff
(1 row)

BIT_XOR
Performs a bitwise logical XOR operation on two BINARY data type columns.
Syntax
BIT_XOR( column1, column2 )
Parameters
column  Are the two input BINARY data type columns.
Usage
• The function returns the value of the bitwise logical XOR operation.
• BIT_XOR operates on VARBINARY types explicitly and operates on BINARY types implicitly through casts.
• If either parameter is NULL, the return value is also NULL.
• If the columns are different lengths, the return values are treated as though they are all equal in length and are right-extended with zero bytes. For example, given a group containing the hex values 'ff', null, and 'f', the function ignores the null value and extends the value 'f' to 'f0'.
Example
First create a sample table and projections with binary columns:
CREATE TABLE t (c VARBINARY(2));
SELECT IMPLEMENT_TEMP_DESIGN('');
Note that the aggregate functions treat '0xff' like '0xFF00':
INSERT INTO t values(HEX_TO_BINARY('0xFF'));
INSERT INTO t values(HEX_TO_BINARY('0xFFFF'));
INSERT INTO t values(HEX_TO_BINARY('0xF00F'));
Query table t to see column c output:
SELECT TO_HEX(c) FROM t;
 to_hex
--------
 ff
 ffff
 f00f
(3 rows)
Finally, issue a bitwise XOR operation:
SELECT TO_HEX(BIT_XOR(c)) FROM t;
 to_hex
--------
 f0f0
(1 row)
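The zero-byte right-extension rule shared by BIT_AND, BIT_OR, and BIT_XOR can be modeled directly. This Python sketch (illustrative, not Vertica internals) reproduces the documented group results for 'ff', 'ffff', and 'f00f':

```python
from functools import reduce
from operator import and_, or_, xor

def bit_agg(op, values):
    # Right-extend shorter values with zero bytes ('ff' -> 'ff00'),
    # then fold the bitwise operator over each byte position.
    width = max(len(v) for v in values)
    padded = [v + b"\x00" * (width - len(v)) for v in values]
    return bytes(reduce(op, column) for column in zip(*padded))

group = [bytes.fromhex(h) for h in ("ff", "ffff", "f00f")]
print(bit_agg(and_, group).hex())  # f000
print(bit_agg(or_, group).hex())   # ffff
print(bit_agg(xor, group).hex())   # f0f0
```

All three results match the example outputs shown in the BIT_AND, BIT_OR, and BIT_XOR sections.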

COUNT
Returns the number of rows in each group of the result set for which the expression is not null. The return value is a BIGINT.
Syntax
COUNT ( [ ALL | DISTINCT ] expression )
Parameters
ALL         Invokes the aggregate function for all rows in the group (default)
DISTINCT    Invokes the aggregate function for all distinct non-null values of the expression found in the group
expression  (Any data type) contains at least one column reference (see "Column References" on page 74)
Examples
The following query returns the number of distinct values in the primary_key column of the date_dimension table:
SELECT COUNT (DISTINCT date_key) FROM date_dimension;
 count
-------
  1826
(1 row)
The next example returns all distinct values of evaluating the expression x+y for all records of the fact table:
SELECT COUNT (DISTINCT date_key + product_key) FROM inventory_fact;
 count
-------
 21560
(1 row)
An equivalent query is as follows (we use the LIMIT key to cut back on the number of rows returned):
SELECT COUNT(date_key + product_key) FROM inventory_fact GROUP BY date_key LIMIT 10;
 count
-------
   173
    31
   321
   113
   286
    84
   244
   238
   145
   202
(10 rows)

This query selects each distinct product_key value in table inventory_fact and returns the number of distinct values of date_key in all records with the specific distinct product_key value:
SELECT product_key, COUNT (DISTINCT date_key) FROM inventory_fact GROUP BY product_key LIMIT 10;
 product_key | count
-------------+-------
           1 |    12
           2 |    18
           3 |    13
           4 |    17
           5 |    11
           6 |    14
           7 |    13
           8 |    17
           9 |    15
          10 |    12
(10 rows)
This query counts each distinct product_key value in table inventory_fact with the constant "1":
SELECT product_key, COUNT (DISTINCT product_key) FROM inventory_fact GROUP BY product_key LIMIT 10;
 product_key | count
-------------+-------
           1 |     1
           2 |     1
           3 |     1
           4 |     1
           5 |     1
           6 |     1
           7 |     1
           8 |     1
           9 |     1
          10 |     1
(10 rows)
Note: The DISTINCT keyword is redundant if all members of the SELECT list are present in the GROUP BY list as well.
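The distinction between COUNT(*), COUNT(expression), and COUNT(DISTINCT expression) used throughout these examples is standard SQL and can be seen in miniature with SQLite (Python stdlib, not Vertica; the table and data are made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE warehouse_dim (warehouse_name TEXT)")
con.executemany("INSERT INTO warehouse_dim VALUES (?)",
                [("north",), ("south",), ("north",), (None,)])

print(con.execute("SELECT COUNT(*) FROM warehouse_dim").fetchone()[0])               # 4: all rows
print(con.execute("SELECT COUNT(warehouse_name) FROM warehouse_dim").fetchone()[0])  # 3: NULL skipped
print(con.execute(
    "SELECT COUNT(DISTINCT warehouse_name) FROM warehouse_dim").fetchone()[0])       # 2: duplicates collapse
```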

This query selects each distinct date_key value and counts the number of distinct product_key values for all records with the specific date_key value. It then sums all the qty_in_stock values in all records with the specific date_key value and groups the results by date_key:
SELECT date_key, COUNT (DISTINCT product_key), SUM(qty_in_stock) FROM inventory_fact GROUP BY date_key LIMIT 10;
 date_key | count |  sum
----------+-------+--------
        1 |   173 |  88953
        2 |    31 |  16315
        3 |   318 | 156003
        4 |   113 |  53341
        5 |   285 | 148380
        6 |    84 |  42421
        7 |   241 | 119315
        8 |   238 | 122380
        9 |   142 |  70151
       10 |   202 |  95274
(10 rows)
This query selects each distinct product_key value and then counts the number of distinct date_key values for all records with the specific product_key value. It also counts the number of distinct warehouse_key values in all records with the specific product_key value:
SELECT product_key, COUNT (DISTINCT date_key), COUNT (DISTINCT warehouse_key) FROM inventory_fact GROUP BY product_key LIMIT 15;
 product_key | count | count
-------------+-------+-------
           1 |    12 |    12
           2 |    18 |    18
           3 |    13 |    12
           4 |    17 |    18
           5 |    11 |     9
           6 |    14 |    13
           7 |    13 |    13
           8 |    17 |    15
           9 |    15 |    14
          10 |    12 |    12
          11 |    11 |    11
          12 |    13 |    12
          13 |     9 |     7
          14 |    13 |    13
          15 |    18 |    17
(15 rows)
This query selects each distinct product_key value, counts the number of distinct date_key and warehouse_key values for all records with the specific product_key value, and then sums all qty_in_stock

values in records with the specific product_key value. It then returns the number of product_version values in records with the specific product_key value:
SELECT product_key, COUNT (DISTINCT date_key), COUNT (DISTINCT warehouse_key), SUM (qty_in_stock), COUNT (product_version) FROM inventory_fact GROUP BY product_key LIMIT 15;
 product_key | count | count |  sum  | count
-------------+-------+-------+-------+-------
           1 |    12 |    12 |  5530 |    12
           2 |    18 |    18 |  9605 |    18
           3 |    13 |    12 |  8404 |    13
           4 |    17 |    18 | 10006 |    18
           5 |    11 |     9 |  4794 |    11
           6 |    14 |    13 |  7359 |    14
           7 |    13 |    13 |  7828 |    13
           8 |    17 |    15 |  9074 |    17
           9 |    15 |    14 |  7032 |    15
          10 |    12 |    12 |  5359 |    12
          11 |    11 |    11 |  6049 |    11
          12 |    13 |    12 |  6075 |    13
          13 |     9 |     7 |  3470 |     9
          14 |    13 |    13 |  5125 |    13
          15 |    18 |    17 |  9277 |    18
(15 rows)

COUNT(*)
Returns the number of rows in each group of the result set. The return value is a BIGINT.
Syntax
COUNT(*)
Parameters
*  Indicates that the count does not apply to any specific column or expression in the select list
Notes
COUNT(*) requires a FROM Clause (page 384).
Examples
The following example returns the number of warehouses from the warehouse dimension table:
SELECT COUNT(warehouse_name) FROM warehouse_dimension;
 count
-------
   100
(1 row)
The next example returns the total number of vendors:

SELECT COUNT(*) FROM vendor_dimension;
 count
-------
    50
(1 row)

MAX
Returns the greatest value of an expression over a group of rows. The return value is the same as the expression data type.
Syntax
MAX ( [ ALL | DISTINCT ] expression )
Parameters
ALL | DISTINCT  Are meaningless in this context
expression      (Any numeric, string, binary, or date/time type) contains at least one column reference (see "Column References" on page 74)
Example
This example returns the largest value (dollar amount) of the sales_dollar_amount column:
SELECT MAX(sales_dollar_amount) AS Highest_Sale FROM store.store_sales_fact;
 highest_sale
--------------
          600
(1 row)

MIN
Returns the smallest value of an expression over a group of rows. The return value is the same as the expression data type.
Syntax
MIN ( [ ALL | DISTINCT ] expression )
Parameters
ALL | DISTINCT  Are meaningless in this context
expression      (Any numeric, string, binary, or date/time type) contains at least one column reference (see "Column References" on page 74)
Example
This example returns the lowest salary:

SELECT MIN(annual_salary) AS Lowest_Paid FROM employee_dimension;
 lowest_paid
-------------
        1200
(1 row)

STDDEV
The non-standard function STDDEV is provided for compatibility with other databases. It is semantically identical to STDDEV_SAMP. Evaluates the statistical sample standard deviation for each member of the group.
STDDEV_SAMP(expression) = SQRT(VAR_SAMP(expression))
Syntax
STDDEV_SAMP( expression )
Parameters
expression  (INTEGER, BIGINT, DOUBLE PRECISION, or INTERVAL) contains at least one column reference (see "Column References" on page 74)
Examples
SELECT STDDEV_SAMP(household_id) FROM customer_dimension;
   stddev_samp
------------------
 8651.50842400771
(1 row)
See Also
STDDEV_SAMP (page 122)

STDDEV_POP
Evaluates the statistical population standard deviation for each member of the group.
STDDEV_POP(expression) = SQRT(VAR_POP(expression))
Syntax
STDDEV_POP ( expression )
Parameters
expression  (INTEGER, BIGINT, DOUBLE PRECISION, or INTERVAL) contains at least one column reference (see "Column References" on page 74)

Examples
SELECT STDDEV_POP(household_id) FROM customer_dimension;
    stddev_pop
------------------
 8651.41895973367
(1 row)

STDDEV_SAMP
Evaluates the statistical sample standard deviation for each member of the group.
STDDEV_SAMP(expression) = SQRT(VAR_SAMP(expression))
Syntax
STDDEV_SAMP( expression )
Parameters
expression  (INTEGER, BIGINT, DOUBLE PRECISION, or INTERVAL) contains at least one column reference (see "Column References" on page 74)
Examples
SELECT STDDEV_SAMP(household_id) FROM customer_dimension;
   stddev_samp
------------------
 8651.50842400771
(1 row)

SUM
Computes the sum of an expression over a group of rows. It returns a DOUBLE PRECISION value for a floating-point expression. Otherwise, the return value is the same as the expression data type.
Syntax
SUM ( [ ALL | DISTINCT ] expression )
Parameters
ALL         Invokes the aggregate function for all rows in the group (default)
DISTINCT    Invokes the aggregate function for all distinct non-null values of the expression found in the group
expression  (INTEGER, BIGINT, DOUBLE PRECISION, or INTERVAL) contains at least one column reference (see "Column References" on page 74)
Notes
If you encounter overflow when using SUM, use SUM_FLOAT() (page 123), which converts to floating point.
Example
This example returns the total sum of the product_cost column:
SELECT SUM(product_cost) AS cost FROM product_dimension;
   cost
----------
 18181102
(1 row)

SUM_FLOAT
Computes the sum of an expression over a group of rows. It returns a DOUBLE PRECISION value for the expression, regardless of the expression type.
Syntax
SUM_FLOAT ( [ ALL | DISTINCT ] expression )
Parameters
ALL         Invokes the aggregate function for all rows in the group (default)
DISTINCT    Invokes the aggregate function for all distinct non-null values of the expression found in the group
expression  (INTEGER, BIGINT, DOUBLE PRECISION, or INTERVAL) contains at least one column reference (see "Column References" on page 74)
Example
SELECT SUM_FLOAT(average_competitor_price) AS cost FROM product_dimension;
  cost
---------
 9042850
(1 row)

VAR_POP
Evaluates the statistical population variance for each member of the group. This is defined as the sum of squares of the difference of expression from the mean of expression, divided by the number of rows remaining.
Syntax
VAR_POP ( expression )

Parameters
expression  (INTEGER, BIGINT, DOUBLE PRECISION, or INTERVAL) contains at least one column reference (see "Column References" on page 74)
Examples
SELECT VAR_POP(household_id) FROM customer_dimension;
     var_pop
------------------
 74847050.0168393
(1 row)

VAR_SAMP
Evaluates the sample variance for each row of the group. This is defined as the sum of squares of the difference of expression from the mean of expression, divided by the number of rows remaining minus 1 (one).
Syntax
VAR_SAMP ( expression )
Parameters
expression  (INTEGER, BIGINT, DOUBLE PRECISION, or INTERVAL) contains at least one column reference (see "Column References" on page 74)
Examples
SELECT VAR_SAMP(household_id) FROM customer_dimension;
     var_samp
------------------
 74848598.0106764
(1 row)

VARIANCE
The non-standard function VARIANCE is provided for compatibility with other databases. It is semantically identical to VAR_SAMP. Evaluates the sample variance for each row of the group. This is defined as the sum of squares of the difference of expression from the mean of expression, divided by the number of rows remaining minus 1 (one).
Syntax
VAR_SAMP ( expression )

Parameters
expression  (INTEGER, BIGINT, DOUBLE PRECISION, or INTERVAL) contains at least one column reference (see "Column References" on page 74)
Examples
SELECT VAR_SAMP(household_id) FROM customer_dimension;
     var_samp
------------------
 74848598.0106764
(1 row)
See Also
VAR_SAMP (page 124)

Analytic Functions
The ANSI SQL 99 standard introduced a set of functionality, called Online Analytical Processing (OLAP), to handle complex analysis and reporting. Before OLAP, these functions required elaborate subqueries and many self-joins, which were complex and slow. Using a unique construct called a sliding window, these functions provide special syntax to show, for example, a moving average of retail volume over a discrete period of time. Each record contains a sliding window that determines the range of input rows used to perform calculations on the current row, and the size of the window is based on either logical intervals (such as time) or on a physical number of records.
Analytic functions differ from aggregate functions (page 112) in that they return multiple rows for each group. This group of rows is called a window and is defined by the analytic clause OVER(). Analytic functions, however, can be used in a subquery or in the parent query.
Analytic functions:
• Require an OVER() clause.
• Occur only in the SELECT and ORDER BY clauses.
• Are not allowed in the WHERE clause; that is, they cannot be used in place of an expression.
• Cannot be nested.
Note: Vertica does not currently support window customization using the OVER() clause, except for FIRST_VALUE/LAST_VALUE (page 128).
Analytic Function Syntax
analytic_function( [ arguments ] ) OVER (
  [ PARTITION BY { expr [, expr ]... | ( expr [, expr ]... ) } ]

  [ ORDER BY { expr } [ ASC | DESC ] [ NULLS FIRST | NULLS LAST ]
      [, { expr } [ ASC | DESC ] [ NULLS FIRST | NULLS LAST ] ]...
    [ windowing_clause ] ] )
Vertica supports the following analytic and reporting functions:
• FIRST_VALUE/LAST_VALUE (page 128)
• LEAD/LAG (page 131)
• RANK/DENSE_RANK (page 135)
• ROW_NUMBER (page 137)

The OVER() clause with PARTITION BY and ORDER BY
Analytic Syntactic Construct
OVER()  Is required for analytic functions and indicates that the function operates on a query result set. The result set is the rows that are returned after the FROM, WHERE, GROUP BY, and HAVING clauses have been evaluated. OVER() can contain an optional partition clause, an optional ordering clause and, in the case of some analytic functions, a windowing clause. Note: Vertica provides limited windowing support for FIRST_VALUE/LAST_VALUE (page 128).
partition_clause  Divides the rows in the input relation by a given list of columns (or expressions). If the partition_clause is omitted, all input rows are treated as a single partition.
order_clause  Sorts the rows in the partition and generates an ordered set of rows that is then used as input to the windowing clause (if present). The analytic order_clause specifies whether data is returned in ascending or descending order and specifies where null values appear in the sorted result as either first or last.
expr  Is the expression to evaluate, for example, a constant, column, nonanalytic function expression, or expressions involving any of these.
ASC | DESC  Specifies the ordering sequence as ascending (default) or descending.
NULLS FIRST | LAST  Indicates the position of nulls in the ordered sequence as either first or last. The order makes nulls compare either high or low with respect to non-null values. If the sequence is specified as ascending order, ASC NULLS FIRST implies that nulls are smaller than other non-null values, and ASC NULLS LAST implies that nulls are larger than non-null values. The opposite is true for descending order. The default is ASC NULLS LAST and DESC NULLS FIRST.

Note: Analytic functions operate on rows in the order specified by the function's order_clause. However, the analytic order_clause does not guarantee the order of the SQL result. Use the SQL ORDER BY clause to guarantee the ordering of the final result set.

Performance Optimization for Analytic Sort Computation
Vertica stores data in projections sorted in a specific way. In the following example, Vertica sorts inputs from table t on column x, as specified in the OVER(ORDER BY) clause, and then evaluates RANK():
CREATE TABLE t ( x FLOAT, y FLOAT );
CREATE PROJECTION t_p (x,y) AS SELECT * FROM t ORDER BY x,y UNSEGMENTED ALL NODES;
SELECT x, RANK() OVER (ORDER BY x) FROM t;
The projection created for table t is also sorted on column x, so Vertica can eliminate the ORDER BY clause and execute the query quickly. Because column x is a FLOAT data type and the projection sort order matches the default ordering (ASC + NULLS LAST), Vertica can eliminate the sort.
Assume, however, that column x had been defined as INTEGER. Vertica cannot eliminate the sort because the projection sort order for integers (ASC + NULLS FIRST) does not match the default ordering (ASC + NULLS LAST). To modify the above query using (x INT), specify the placement of nulls to match the projection ordering:
SELECT x, RANK() OVER (ORDER BY x NULLS FIRST) FROM t;
If column x is a STRING data type,
the following query would eliminate the sort:
SELECT x, RANK() OVER (ORDER BY x NULLS LAST) FROM t;
Note that omitting NULLS LAST from the (x STRING) query still eliminates the sort, because ASC + NULLS LAST is the default sort specification for both the analytical ORDER BY clause and for string-related columns in Vertica.
In Vertica:
• For NUMERIC, INTEGER, DATE, TIME, TIMESTAMP, and INTERVAL-related columns, null values are placed at the beginning of a sorted projection (NULLS FIRST).
• For FLOAT, STRING, and BOOLEAN-related columns, null values are placed at the end of a sorted projection (NULLS LAST).
Summary
The analytic ORDER BY clause orders the data used by the analytic function as either ascending (ASC) or descending (DESC) and specifies where null values appear in the sorted result as either NULLS FIRST or NULLS LAST. When using analytic ORDER BY operations, the default sort order is as follows:
• With ASC + NULLS LAST, null values are placed at the end of the sorted result.
• With DESC + NULLS FIRST, null values are placed at the beginning of the sorted result.
The SQL ORDER BY clause, on the other hand, specifies only ascending or descending order. So when formulating a SQL query involving analytics computation, if you do not care about the placement of your null values, or if you know the column or columns contain no null values, you can carefully formulate the SQL query to eliminate the sort and, thereby, choose the faster query.

FIRST_VALUE / LAST_VALUE
FIRST_VALUE returns values from the first row of a window. LAST_VALUE returns values from the last row of a window.
Syntax
{ FIRST_VALUE | LAST_VALUE }( expr [ IGNORE NULLS ] ) OVER ( [ partition_clause ] [ order_clause ] )
Parameters
expr  Is the expression to evaluate, for example, a constant, column, nonanalytic function, or expressions involving any of these.
IGNORE NULLS  Returns the first or last non-null value in the set, or NULL if all values are NULL.

OVER()  Is required. Indicates that the function operates on a query result set (the rows that are returned after the FROM, WHERE, GROUP BY, and HAVING clauses have been evaluated).
partition_clause  Divides the rows in the input relation by a given list of columns (or expressions). If the partition_clause is omitted, all input rows are treated as a single partition.
order_clause  Sorts the rows in the partition and generates an ordered set of rows that is then used as input to the windowing clause (if present). The analytic order_clause specifies whether data is returned in ascending or descending order and specifies where null values appear in the sorted result as either first or last.
Notes
• The FIRST_VALUE and LAST_VALUE functions allow you to select a table's first and last value (according to the ORDER BY clause), without having to use a self-join. These functions are useful when you want to use the first or last value as a baseline in calculations.
• The FIRST_VALUE function takes the first record from the window. The expression is then computed against the first record, and results are returned. The LAST_VALUE function takes the record from the partition after the analytic ORDER BY clause.
• FIRST_VALUE and LAST_VALUE can be used only with a window function whose default window is RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW.
• Unlike most other aggregate functions (page 112), LAST_VALUE does not always return the last value of a partition. If the windowing clause is omitted from the analytic clause, LAST_VALUE operates on this default window. Results, therefore, can seem non-intuitive because the function does not return the bottom of the current partition. It returns the bottom of the window, which continues to change along with the current input row being processed.
• Due to default window semantics, Vertica recommends that you use FIRST_VALUE and LAST_VALUE with the analytic ORDER BY clause to produce deterministic results.
• If IGNORE NULLS is not specified, the first or last value is returned whether or not it is NULL.
Examples
The following query, which asks for the first value in the partitioned day of week, illustrates the potential nondeterministic nature of the FIRST_VALUE function:
SELECT calendar_year, date_key, day_of_week, full_date_description,
       FIRST_VALUE(full_date_description)
         OVER (PARTITION BY calendar_month_number_in_year ORDER BY day_of_week)
         AS "first_value"
FROM date_dimension
WHERE calendar_year=2003 AND calendar_month_number_in_year=1;

 calendar_year | date_key | day_of_week | full_date_description |   first_value
---------------+----------+-------------+-----------------------+------------------
          2003 |       31 | Friday      | January 31, 2003      | January 31, 2003
          2003 |        3 | Friday      | January 3, 2003       | January 31, 2003
          2003 |       10 | Friday      | January 10, 2003      | January 31, 2003
          2003 |       17 | Friday      | January 17, 2003      | January 31, 2003
          2003 |       24 | Friday      | January 24, 2003      | January 31, 2003
          2003 |        6 | Monday      | January 6, 2003       | January 31, 2003
          2003 |       13 | Monday      | January 13, 2003      | January 31, 2003
          2003 |       20 | Monday      | January 20, 2003      | January 31, 2003
          2003 |       27 | Monday      | January 27, 2003      | January 31, 2003
          2003 |        4 | Saturday    | January 4, 2003       | January 31, 2003
          2003 |       11 | Saturday    | January 11, 2003      | January 31, 2003
          2003 |       18 | Saturday    | January 18, 2003      | January 31, 2003
          2003 |       25 | Saturday    | January 25, 2003      | January 31, 2003
          2003 |        5 | Sunday      | January 5, 2003       | January 31, 2003
          2003 |       12 | Sunday      | January 12, 2003      | January 31, 2003
          2003 |       19 | Sunday      | January 19, 2003      | January 31, 2003
          2003 |       26 | Sunday      | January 26, 2003      | January 31, 2003
          2003 |        2 | Thursday    | January 2, 2003       | January 31, 2003
          2003 |        9 | Thursday    | January 9, 2003       | January 31, 2003
          2003 |       16 | Thursday    | January 16, 2003      | January 31, 2003
          2003 |       23 | Thursday    | January 23, 2003      | January 31, 2003
          2003 |       30 | Thursday    | January 30, 2003      | January 31, 2003
          2003 |        7 | Tuesday     | January 7, 2003       | January 31, 2003
          2003 |       14 | Tuesday     | January 14, 2003      | January 31, 2003
          2003 |       21 | Tuesday     | January 21, 2003      | January 31, 2003
          2003 |       28 | Tuesday     | January 28, 2003      | January 31, 2003
          2003 |        1 | Wednesday   | January 1, 2003       | January 31, 2003
          2003 |        8 | Wednesday   | January 8, 2003       | January 31, 2003
          2003 |       15 | Wednesday   | January 15, 2003      | January 31, 2003
          2003 |       22 | Wednesday   | January 22, 2003      | January 31, 2003
          2003 |       29 | Wednesday   | January 29, 2003      | January 31, 2003
(31 rows)

The first value returned is January 31, 2003; however, the next time the same query is run, the first value could be January 24 or January 3, 2003, or the 10th or 17th. The reason is that the analytic ORDER BY column (day_of_week) returns rows that contain ties (multiple Fridays). These repeated values make the ORDER BY evaluation result nondeterministic, because rows that contain ties can be ordered in any way, and any one of those rows qualifies as being the first value of day_of_week.

Note: The day_of_week results are returned in alphabetical order because of lexical rules. The fact that each day does not appear ordered by the 7-day week cycle (for example, starting with Sunday followed by Monday, Tuesday, and so on) has no effect on results.

To return deterministic results, modify the query so that it performs its analytic ORDER BY operations on a unique field, such as date_key:

SELECT calendar_year, date_key, day_of_week, full_date_description,
       FIRST_VALUE(full_date_description)
         OVER (PARTITION BY calendar_month_number_in_year
               ORDER BY date_key) AS "first_value"
FROM date_dimension
WHERE calendar_year=2003;

Notice that the results return a first value of January 1 for the January partition and a first value of February 1 for the February partition. Also, there are no ties in the full_date_description column:

 calendar_year | date_key | day_of_week | full_date_description |   first_value
---------------+----------+-------------+-----------------------+-----------------

          2003 |        1 | Wednesday   | January 1, 2003       | January 1, 2003
          2003 |        2 | Thursday    | January 2, 2003       | January 1, 2003
          2003 |        3 | Friday      | January 3, 2003       | January 1, 2003
          2003 |        4 | Saturday    | January 4, 2003       | January 1, 2003
          2003 |        5 | Sunday      | January 5, 2003       | January 1, 2003
          2003 |        6 | Monday      | January 6, 2003       | January 1, 2003
          2003 |        7 | Tuesday     | January 7, 2003       | January 1, 2003
          2003 |        8 | Wednesday   | January 8, 2003       | January 1, 2003
          2003 |        9 | Thursday    | January 9, 2003       | January 1, 2003
          2003 |       10 | Friday      | January 10, 2003      | January 1, 2003
          2003 |       11 | Saturday    | January 11, 2003      | January 1, 2003
          2003 |       12 | Sunday      | January 12, 2003      | January 1, 2003
          2003 |       13 | Monday      | January 13, 2003      | January 1, 2003
          2003 |       14 | Tuesday     | January 14, 2003      | January 1, 2003
          2003 |       15 | Wednesday   | January 15, 2003      | January 1, 2003
          2003 |       16 | Thursday    | January 16, 2003      | January 1, 2003
          2003 |       17 | Friday      | January 17, 2003      | January 1, 2003
          2003 |       18 | Saturday    | January 18, 2003      | January 1, 2003
          2003 |       19 | Sunday      | January 19, 2003      | January 1, 2003
          2003 |       20 | Monday      | January 20, 2003      | January 1, 2003
          2003 |       21 | Tuesday     | January 21, 2003      | January 1, 2003
          2003 |       22 | Wednesday   | January 22, 2003      | January 1, 2003
          2003 |       23 | Thursday    | January 23, 2003      | January 1, 2003
          2003 |       24 | Friday      | January 24, 2003      | January 1, 2003
          2003 |       25 | Saturday    | January 25, 2003      | January 1, 2003
          2003 |       26 | Sunday      | January 26, 2003      | January 1, 2003
          2003 |       27 | Monday      | January 27, 2003      | January 1, 2003
          2003 |       28 | Tuesday     | January 28, 2003      | January 1, 2003
          2003 |       29 | Wednesday   | January 29, 2003      | January 1, 2003
          2003 |       30 | Thursday    | January 30, 2003      | January 1, 2003
          2003 |       31 | Friday      | January 31, 2003      | January 1, 2003
          2003 |       32 | Saturday    | February 1, 2003      | February 1, 2003
          2003 |       33 | Sunday      | February 2, 2003      | February 1, 2003
          ...
(365 rows)

See Also
TIME_SLICE (page 161)

LEAD / LAG

LEAD and LAG return values from the row after and before the current row, respectively. LEAD provides access to a row at a given offset after the current row; LAG provides access to a row at a given offset before the current row. These functions allow you to access more than one row in a table at the same time and are useful for comparing values when the relative positions of rows can be reliably known.

Syntax
{ LEAD | LAG } ( expr [, offset] [, default] )
    OVER ( [ partition_clause ] order_clause [ ASC | DESC ] [ NULLS FIRST | LAST ] )

Parameters
expr    Is the expression to evaluate, for example, a column, a constant, a function expression, a nonanalytic function, or expressions involving any of these.

offset              Is an optional parameter that defaults to 1. The offset parameter must be (or must evaluate to) a constant positive integer.
default             Is an optional parameter; it is the value returned if offset falls outside the bounds of the table or partition. The default is NULL. Note: The third input argument must be a constant value or an expression that can be evaluated to a constant, and its data type should be coercible to that of the first argument.
OVER()              Is required. Indicates that the function operates on a query result set (the rows that are returned after the FROM, WHERE, GROUP BY, and HAVING clauses have been evaluated).
partition_clause    Divides the rows in the input relation by a given list of columns (or expressions). If the partition_clause is omitted, all input rows are treated as a single partition.
order_clause        Sorts the rows in the partition and generates an ordered set of rows that is then used as input to the windowing clause (if present), to the analytic function, or to both. The analytic order_clause specifies whether data is returned in ascending or descending order and specifies where null values appear in the sorted result as either first or last.
ASC | DESC          Specifies the ordering sequence as ascending (default) or descending.
NULLS FIRST | LAST  Indicates the position of nulls in the ordered sequence as either first or last. The order makes nulls compare either high or low with respect to non-null values. If the sequence is specified as ascending order, ASC NULLS FIRST implies that nulls are smaller than other non-null values, and ASC NULLS LAST implies that nulls are larger than non-null values. The opposite is true for descending order. The default is ASC NULLS LAST and DESC NULLS FIRST.

Notes
•  Because you can use LEAD and LAG to access more than one row in a table at the same time, you can avoid using the more costly self join, thereby enhancing query processing speed.
•  Analytic functions, such as LAG and LEAD, cannot be nested within aggregate functions.
•  Analytic functions operate on rows in the order specified by the function's order_clause. However, the analytic order_clause does not guarantee the order of the SQL result. Use the SQL ORDER BY clause to guarantee the ordering of the final result set.

Examples
Say you want to sum the current balance by date in a table and also sum the previous balance from the last day. Given the inputs that follow, the data should satisfy the following conditions:

•  For each some_id, there is exactly 1 row for each date represented by month_date.
•  For each some_id, the set of dates is consecutive; that is, if there is a row for February 24 and a row for February 26, there should also be a row for February 25.
•  Each some_id has the same set of dates.

CREATE TABLE balances (
    month_date DATE,
    current_bal INT,
    some_id INT
);
CREATE PROJECTION bal_p1 (month_date, current_bal, some_id)
    AS SELECT * FROM balances UNSEGMENTED ALL NODES;
INSERT INTO balances values ('2009-02-24', 10, 1);
INSERT INTO balances values ('2009-02-24', 20, 2);
INSERT INTO balances values ('2009-02-24', 30, 3);
INSERT INTO balances values ('2009-02-25', 10, 1);
INSERT INTO balances values ('2009-02-25', 20, 2);
INSERT INTO balances values ('2009-02-25', 20, 3);
INSERT INTO balances values ('2009-02-26', 10, 1);
INSERT INTO balances values ('2009-02-26', 20, 2);
INSERT INTO balances values ('2009-02-26', 30, 3);

Now execute the LAG() function to sum the current balance for each date and sum the previous balance from the last day:

SELECT month_date,
       SUM(current_bal) AS current_bal_sum,
       SUM(previous_bal) AS previous_bal_sum
FROM (SELECT month_date, current_bal,
             LAG(current_bal, 1, 0)
               OVER (PARTITION BY some_id ORDER BY month_date) AS previous_bal
      FROM balances) AS subQ
GROUP BY month_date
ORDER BY month_date;

 month_date | current_bal_sum | previous_bal_sum
------------+-----------------+------------------
 2009-02-24 |              60 |                0
 2009-02-25 |              50 |               60
 2009-02-26 |              60 |               50
(3 rows)

Using the same example data, the following query would not be allowed because LAG is nested inside an aggregate function:

SELECT month_date,
       SUM(current_bal) AS current_bal_sum,
       SUM(LAG(current_bal, 1, 0)
             OVER (PARTITION BY some_id ORDER BY month_date)) AS previous_bal_sum
FROM some_table
GROUP BY month_date
ORDER BY month_date;
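The balances example can be reproduced outside Vertica with any database that implements standard-SQL window functions. The sketch below uses Python's built-in sqlite3 module (SQLite 3.25 or later is assumed for window-function support); the table name, columns, and data mirror the example above, but this is an illustration, not Vertica code:

```python
# Sketch (not Vertica): reproduce the balances LAG() example with SQLite's
# standard-SQL window functions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE balances (month_date TEXT, current_bal INT, some_id INT)")
conn.executemany("INSERT INTO balances VALUES (?, ?, ?)", [
    ("2009-02-24", 10, 1), ("2009-02-24", 20, 2), ("2009-02-24", 30, 3),
    ("2009-02-25", 10, 1), ("2009-02-25", 20, 2), ("2009-02-25", 20, 3),
    ("2009-02-26", 10, 1), ("2009-02-26", 20, 2), ("2009-02-26", 30, 3),
])

result = conn.execute("""
    SELECT month_date,
           SUM(current_bal)  AS current_bal_sum,
           SUM(previous_bal) AS previous_bal_sum
    FROM (SELECT month_date, current_bal,
                 LAG(current_bal, 1, 0) OVER (PARTITION BY some_id
                                              ORDER BY month_date) AS previous_bal
          FROM balances) AS subQ
    GROUP BY month_date
    ORDER BY month_date
""").fetchall()
print(result)
# [('2009-02-24', 60, 0), ('2009-02-25', 50, 60), ('2009-02-26', 60, 50)]
```

Note the shape of the query: the inner SELECT computes each row's previous balance per some_id, and only the outer query aggregates, which is exactly what the nesting restriction above requires, because LAG cannot appear inside SUM.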

In the next example, the LEAD function finds the hire date of the employee hired just after the current row:

SELECT employee_region, hire_date, employee_key, employee_last_name,
       LEAD(hire_date, 1)
         OVER (PARTITION BY employee_region ORDER BY hire_date) AS "next_hired"
FROM employee_dimension
ORDER BY employee_region, hire_date, employee_key;

 employee_region | hire_date  | employee_key | employee_last_name | next_hired
-----------------+------------+--------------+--------------------+------------
 East            | 1956-04-08 |         9218 | Harris             | 1957-02-06
 East            | 1957-02-06 |         7799 | Stein              | 1957-05-25
 East            | 1957-05-25 |         3687 | Farmer             | 1957-06-26
 East            | 1957-06-26 |         9474 | Bauer              | 1957-08-18
 East            | 1957-08-18 |          570 | Jefferson          | 1957-08-24
 East            | 1957-08-24 |         4363 | Wilson             | 1958-02-17
 East            | 1958-02-17 |         6457 | McCabe             | 1958-06-26
 East            | 1958-06-26 |         6196 | Li                 | 1958-07-16
 East            | 1958-07-16 |         7749 | Harris             | 1958-09-18
 East            | 1958-09-18 |         9678 | Sanchez            | 1958-11-10
(10 rows)

In this example, the LAG function first returns the annual income from the previous row, and then the query calculates the difference between the income in the current row and the income in the previous row. Note: The vmart example database returns over 50,000 rows, so we'll limit the results to 20 records:

SELECT occupation, customer_key, customer_name, annual_income,
       LAG(annual_income, 1, 0)
         OVER (PARTITION BY occupation ORDER BY annual_income) AS prev_income,
       annual_income -
       LAG(annual_income, 1, 0)
         OVER (PARTITION BY occupation ORDER BY annual_income) AS difference
FROM customer_dimension
ORDER BY occupation, customer_key
LIMIT 20;

 occupation | customer_key |    customer_name     | annual_income | prev_income | difference
------------+--------------+----------------------+---------------+-------------+------------
 Accountant |           15 | Midori V. Gauthier   |         29033 |       28412 |        621
 Accountant |           43 | Midori S. Overstreet |        705146 |      704335 |        811
 Accountant |           93 | Robert P. Greenwood  |        639649 |      639029 |        620
 Accountant |          102 | Sam T. Lampert       |        723294 |      722737 |        557
 Accountant |          134 | Martha B. Campbell   |        471722 |      471355 |        367
 Accountant |          165 | James C. Reyes       |        735691 |      735355 |        336
 Accountant |          225 | Ben W. Peterson      |        692610 |      692535 |         75
 Accountant |          270 | Jessica S. Lang      |        684204 |      682274 |       1930
 Accountant |          273 | Mark X. Jackson      |        816858 |      815557 |       1301
 Accountant |          295 | Sharon K. Carcetti   |        810528 |      810284 |        244
 Accountant |          338 | Anna S. Moore        |        677433 |      677050 |        383
 Accountant |          377 | William I. Kramer    |        376841 |      376474 |        367
 Accountant |          438 | Joanna A. McNulty    |        901636 |      901561 |         75
 Accountant |          452 | Kim P. Rodriguez     |        282359 |      280976 |       1383
 Accountant |          467 | Meghan K. Vogel      |        187246 |      185539 |       1707
 Accountant |          478 | Tanya E. Brown       |        126023 |      124797 |       1226
 Accountant |          511 | Midori P. Jones      |        915149 |      914872 |        277
 Accountant |          525 | Alexander K. Vu      |        616101 |      615439 |        662
 Accountant |          550 | Sam P. Farmer        |         70574 |       70449 |        125
 Accountant |          577 | Robert U. McCabe     |        147396 |      144482 |       2914
(20 rows)

The next example uses both LEAD and LAG to return, for each employee, the hire date of the employee hired just after and the employee hired just before the current row:
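A minimal, self-contained version of the LEAD pattern shown above, again sketched with SQLite's standard-SQL window functions rather than Vertica (SQLite 3.25+ assumed; the four rows below are made up for illustration):

```python
# Sketch (not Vertica): LEAD() returns the next hire date within each region;
# the last row of a partition has no following row, so its LEAD is NULL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee_dimension "
             "(employee_region TEXT, hire_date TEXT, employee_last_name TEXT)")
conn.executemany("INSERT INTO employee_dimension VALUES (?, ?, ?)", [
    ("East", "1956-04-08", "Harris"),
    ("East", "1957-02-06", "Stein"),
    ("East", "1957-05-25", "Farmer"),
    ("West", "1957-01-01", "Bauer"),
])
result = conn.execute("""
    SELECT employee_region, hire_date, employee_last_name,
           LEAD(hire_date, 1) OVER (PARTITION BY employee_region
                                    ORDER BY hire_date) AS next_hired
    FROM employee_dimension
    ORDER BY employee_region, hire_date
""").fetchall()
print(result[0])  # ('East', '1956-04-08', 'Harris', '1957-02-06')
```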

SELECT hire_date, employee_key, employee_last_name,
       LEAD(hire_date, 1) OVER (ORDER BY hire_date) AS "next_hired",
       LAG(hire_date, 1) OVER (ORDER BY hire_date) AS "last_hired"
FROM employee_dimension
ORDER BY hire_date;

 hire_date  | employee_key | employee_last_name | next_hired | last_hired
------------+--------------+--------------------+------------+------------
 1956-04-11 |         2694 | Farmer             | 1956-05-12 |
 1956-05-12 |         5486 | Winkler            | 1956-09-18 | 1956-04-11
 1956-09-18 |         5525 | McCabe             | 1957-01-15 | 1956-05-12
 1957-01-15 |          560 | Greenwood          | 1957-02-06 | 1956-09-18
 1957-02-06 |         9781 | Bauer              | 1957-05-25 | 1957-01-15
 1957-05-25 |         9506 | Webber             | 1957-07-04 | 1957-02-06
 1957-07-04 |         6723 | Kramer             | 1957-07-07 | 1957-05-25
 1957-07-07 |         5827 | Garnett            | 1957-11-11 | 1957-07-04
 1957-11-11 |          373 | Reyes              | 1957-11-21 | 1957-07-07
 1957-11-21 |         3874 | Martin             | 1958-02-06 | 1957-11-11
(10 rows)

The following example specifies arguments that use different data types, for example annual_income (INT) and occupation (VARCHAR). The query returns an error:

SELECT customer_key, customer_name, annual_income,
       LAG(annual_income, 1, occupation)
         OVER (PARTITION BY occupation ORDER BY customer_key) LAG1
FROM customer_dimension
ORDER BY 3;

ERROR:  Third argument of lag could not be converted from type character varying to type int8
HINT:  You may need to add explicit type cast.

RANK / DENSE_RANK

RANK computes the rank of a value in a group of values. Rank values are skipped in the event of a tie, that is, when more than one row has the same rank; the largest rank value is the number of unique values returned by the query. DENSE_RANK computes the rank of a row in an ordered group of rows; the ranks are consecutive integers beginning with 1, and no rows are skipped if more than one row has the same rank. For both functions, the return type is NUMBER. Once the data is sorted within each partition, ranks are given to each row, starting from 1.

Syntax
{ RANK | DENSE_RANK } ( )
    OVER ( [ partition_clause ] order_clause ) [ ASC | DESC ] [ NULLS FIRST | LAST ]

Parameters
OVER()              Is required. Indicates that the function operates on a query result set (the rows that are returned after the FROM, WHERE, GROUP BY, and HAVING clauses have been evaluated). In ranking functions, this clause specifies the measures expr on which ranking is done and defines the order in which rows are sorted in each group (or partition).
partition_clause    Divides the rows in the input relation by a given list of columns (or expressions). If the partition_clause is omitted,

all input rows are treated as a single partition.
order_clause        Sorts the rows in the partition and generates an ordered set of rows that is then used as input to the windowing clause (if present), to the analytic function, or to both. The analytic order_clause specifies whether data is returned in ascending or descending order and specifies where null values appear in the sorted result as either first or last.
ASC | DESC          Specifies the ordering sequence as ascending (default) or descending.
NULLS FIRST | LAST  Indicates the position of nulls in the ordered sequence as either first or last. The order makes nulls compare either high or low with respect to non-null values. If the sequence is specified as ascending order, ASC NULLS FIRST implies that nulls are smaller than other non-null values, and ASC NULLS LAST implies that nulls are larger than non-null values. The opposite is true for descending order. The default is ASC NULLS LAST and DESC NULLS FIRST. Null values are considered larger than any other values: if the ordering sequence is ASC, then nulls appear last; otherwise, nulls appear first. If you omit NULLS FIRST | LAST, the ordering of the null values depends on the ASC or DESC arguments. Nulls are considered equal to other nulls and, therefore, the order in which nulls are presented is non-deterministic.

Note: Analytic functions operate on rows in the order specified by the function's order_clause. However, the analytic order_clause does not guarantee the order of the SQL result. Use the SQL ORDER BY clause to guarantee the ordering of the final result set.

Notes
•  The primary difference between RANK and DENSE_RANK is that RANK leaves gaps when ranking records, whereas DENSE_RANK leaves no gaps. If more than one record occupies a particular position (a tie), RANK places all those records in that position and places the next record after a gap of the additional records (it skips one); DENSE_RANK places all the records in that position only and does not leave a gap for the next rank. For example, if there is a tie at the third position with two records having the same value, both RANK and DENSE_RANK place both records in the third position only, but RANK places the next record at the fifth position — leaving a gap of 1 position — while DENSE_RANK places the next record at the fourth position (no gap).

Examples
This example ranks the longest-standing customers in Massachusetts. The query first computes the customer_since column by region, then within each region ranks customers over the age of 70, restricting the results to customers with businesses in MA.

SELECT customer_type, customer_name,
       RANK() OVER (PARTITION BY customer_region ORDER BY customer_since) AS rank
FROM customer_dimension
WHERE customer_state = 'MA' AND customer_age > '70';

 customer_type | customer_name | rank
---------------+---------------+------
 Company       | Virtadata     |    1
 Company       | Evergen       |    2
 Company       | Infocore      |    3
 Company       | Goldtech      |    4
 Company       | Veritech      |    5
 Company       | Inishop       |    6
 Company       | Intracom      |    7
 Company       | Virtacom      |    8
 Company       | Goldcom       |    9
 Company       | Infostar      |   10
 Company       | Golddata      |   11
 Company       | Everdata      |   12
 Company       | Goldcorp      |   13
(13 rows)

The following example shows the difference between RANK and DENSE_RANK when ranking customers by their annual income. Notice that RANK has a tie at 10 and skips 11, while DENSE_RANK leaves no gaps in the ranking sequence:

SELECT customer_name, SUM(annual_income),
       RANK() OVER (ORDER BY TO_CHAR(SUM(annual_income),'100000') DESC) rank,
       DENSE_RANK() OVER (ORDER BY TO_CHAR(SUM(annual_income),'100000') DESC) dense_rank
FROM customer_dimension
GROUP BY customer_name
LIMIT 15;

    customer_name    |  sum  | rank | dense_rank
---------------------+-------+------+------------
 Brian M. Garnett    | 99838 |    1 |          1
 Tanya A. Brown      | 99834 |    2 |          2
 Tiffany P. Farmer   | 99826 |    3 |          3
 Jose V. Sanchez     | 99673 |    4 |          4
 Marcus D. Rodriguez | 99631 |    5 |          5
 Alexander T. Nguyen | 99604 |    6 |          6
 Sarah G. Lewis      | 99556 |    7 |          7
 Ruth Q. Vu          | 99542 |    8 |          8
 Theodore T. Farmer  | 99532 |    9 |          9
 Daniel P. Li        | 99497 |   10 |         10
 Seth E. Brown       | 99497 |   10 |         10
 Matt X. Gauthier    | 99402 |   12 |         11
 Rebecca W. Lewis    | 99296 |   13 |         12
 Dean L. Wilson      | 99276 |   14 |         13
 Tiffany A. Smith    | 99257 |   15 |         14
(15 rows)

ROW_NUMBER

Assigns a unique number, sequentially, starting from 1, to each row within a partition.
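The gap behavior of RANK, the gap-free numbering of DENSE_RANK, and the always-unique numbering of ROW_NUMBER can be seen side by side in a small sketch. This uses SQLite's standard-SQL window functions rather than Vertica (SQLite 3.25+ assumed); the table and data are invented, with two rows tied on income:

```python
# Sketch (not Vertica): RANK vs. DENSE_RANK vs. ROW_NUMBER on tied values.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, income INT)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [("a", 500), ("b", 400), ("c", 400), ("d", 300)])
rows = conn.execute("""
    SELECT name,
           RANK()       OVER (ORDER BY income DESC) AS rnk,
           DENSE_RANK() OVER (ORDER BY income DESC) AS drnk,
           ROW_NUMBER() OVER (ORDER BY income DESC) AS rn
    FROM customers
    ORDER BY income DESC, name
""").fetchall()

ranks = {name: (rnk, drnk) for name, rnk, drnk, rn in rows}
# RANK leaves a gap after the tie (1, 2, 2, 4); DENSE_RANK does not (1, 2, 2, 3).
# ROW_NUMBER always yields 1..4, but the tie between b and c is broken
# arbitrarily, exactly the nondeterminism the order_clause notes warn about.
```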

Syntax
ROW_NUMBER( )
    OVER ( [ partition_clause ] [ order_clause ] [ ASC | DESC ] [ NULLS FIRST | LAST ] )

Parameters
OVER()              Is required. Indicates that the function operates on a query result set (the rows that are returned after the FROM, WHERE, GROUP BY, and HAVING clauses have been evaluated).
partition_clause    Divides the rows in the input relation by a given list of columns (or expressions). If the partition_clause is omitted, all input rows are treated as a single partition.
order_clause        Sorts the rows in the partition and generates an ordered set of rows that is then used as input to the windowing clause (if present), to the analytic function, or to both.
ASC | DESC          Specifies the ordering sequence as ascending (default) or descending.
NULLS FIRST | LAST  Indicates the position of nulls in the ordered sequence as either first or last. The order makes nulls compare either high or low with respect to non-null values. If the sequence is specified as ascending order, ASC NULLS FIRST implies that nulls are smaller than other non-null values, and ASC NULLS LAST implies that nulls are larger than non-null values. The opposite is true for descending order. The default is ASC NULLS LAST and DESC NULLS FIRST.

Note: Analytic functions operate on rows in the order specified by the function's order_clause. However, the analytic order_clause does not guarantee the order of the SQL result. Use the SQL ORDER BY clause to guarantee the ordering of the final result set.

Notes
•  The OVER() clause is required, even if it does not contain the optional order or partition clauses, for example: SUM OVER (PARTITION BY col1, col2, ...).
•  You can use the optional partition clause to group data into partitions before operating on it. Results are defined by the order_clause, if provided.
•  You can substitute any RANK example for ROW_NUMBER. The difference is that ROW_NUMBER assigns a unique ordinal number, starting with 1, to each row in the ordered set.

Examples
The following query first partitions customers in the customer_dimension table by occupation and then ranks those customers based on the ordered set specified by the analytic partition_clause.

SELECT occupation, customer_key, customer_since, annual_income,
       ROW_NUMBER() OVER (PARTITION BY occupation) AS customer_since_row_num
FROM public.customer_dimension
ORDER BY occupation, customer_since_row_num;

     occupation     | customer_key | customer_since | annual_income | customer_since_row_num
--------------------+--------------+----------------+---------------+------------------------
 Accountant         |        19453 | 1973-11-06     |        602460 |                      1
 Accountant         |        42989 | 1967-07-09     |        850814 |                      2
 Accountant         |        24587 | 1995-05-18     |        180295 |                      3
 Accountant         |        26421 | 2001-10-08     |        126490 |                      4
 Accountant         |        37783 | 1993-03-16     |        790282 |                      5
 Accountant         |        39170 | 1980-12-21     |        823917 |                      6
 Banker             |        13882 | 1998-04-10     |         15134 |                      1
 Banker             |        14054 | 1989-03-16     |        961850 |                      2
 Banker             |        15850 | 1996-01-19     |        262267 |                      3
 Banker             |        29611 | 2004-07-14     |        739016 |                      4
 Doctor             |          261 | 1969-05-11     |        933692 |                      1
 Doctor             |         1264 | 1981-07-19     |        593656 |                      2
 Psychologist       |         5189 | 1999-05-04     |        397431 |                      1
 Psychologist       |         5729 | 1965-03-26     |        339319 |                      2
 Software Developer |         2513 | 1996-09-22     |        920003 |                      1
 Software Developer |         5927 | 2001-03-12     |        633294 |                      2
 Software Developer |         9125 | 1971-10-06     |        198953 |                      3
 Software Developer |        16097 | 1968-09-02     |        748371 |                      4
 Software Developer |        23137 | 1988-12-07     |         92578 |                      5
 Software Developer |        24495 | 1989-04-16     |        149371 |                      6
 Software Developer |        24548 | 1994-09-21     |        743788 |                      7
 Software Developer |        33744 | 2005-12-07     |        735003 |                      8
 Software Developer |         9684 | 1970-05-20     |        246000 |                      9
 Software Developer |        24278 | 2001-11-14     |        122882 |                     10
 Software Developer |        27122 | 1994-02-05     |        810044 |                     11
 Stock Broker       |         5950 | 1965-01-20     |        752120 |                      1
 Stock Broker       |        12517 | 2003-06-13     |        380102 |                      2
 Stock Broker       |        33010 | 1984-05-07     |        384463 |                      3
 Stock Broker       |        46196 | 1972-11-28     |        497049 |                      4
 Stock Broker       |         8710 | 2005-02-11     |         79387 |                      5
 Writer             |         3149 | 1998-11-17     |        643972 |                      1
 Writer             |        17124 | 1965-01-18     |        444747 |                      2
 Writer             |        20100 | 1994-08-13     |        106097 |                      3
 Writer             |        23317 | 2003-05-27     |        511750 |                      4
 Writer             |        42845 | 1967-10-23     |        433483 |                      5
 Writer             |        47560 | 1997-04-23     |        515647 |                      6
(39 rows)

Date/Time Functions

Date and time functions
perform conversion. both DATE + INTEGER and INTEGER + DATE. We show only one of each such pair. Usage Functions that take TIME or TIMESTAMP inputs come in two variants: • TIME WITH TIME ZONE or TIMESTAMP WITH TIME ZONE • TIME WITHOUT TIME ZONE or TIMESTAMP WITHOUT TIME ZONE For brevity. ROW_NUMBER() OVER (PARTITION BY occupation) AS customer_since_row_num FROM public. The + and * operators come in commutative pairs. customer_since. or manipulation operations on date and time data types and can return date and time information.SQL Functions SELECT occupation. for example. customer_since_row_num. customer_key. -139- .

Daylight Savings Time Considerations
When adding an INTERVAL value to (or subtracting an INTERVAL value from) a TIMESTAMP WITH TIME ZONE value, the days component advances (or decrements) the date of the TIMESTAMP WITH TIME ZONE by the indicated number of days. Across daylight saving time changes (with the session time zone set to a time zone that recognizes DST), this means INTERVAL '1 day' does not necessarily equal INTERVAL '24 hours'. For example, with the session time zone set to CST7CDT:

TIMESTAMP WITH TIME ZONE '2005-04-02 12:00-07' + INTERVAL '1 day'

produces

TIMESTAMP WITH TIME ZONE '2005-04-03 12:00-06'

Adding INTERVAL '24 hours' to the same initial TIMESTAMP WITH TIME ZONE produces TIMESTAMP WITH TIME ZONE '2005-04-03 13:00-06', as there is a change in daylight saving time at 2005-04-03 02:00 in time zone CST7CDT.

Date/Time Functions in Transactions
CURRENT_TIMESTAMP and related functions return the start time of the current transaction; their values do not change during the transaction. The intent is to allow a single transaction to have a consistent notion of the "current" time, so that multiple modifications within the same transaction bear the same timestamp. TIMEOFDAY(), however, returns the wall-clock time and advances during transactions.

ADD_MONTHS

Takes a DATE, TIMESTAMP, or TIMESTAMPTZ argument and a number of months and returns a date.

Syntax
SELECT ADD_MONTHS( d , n );

Parameters
d    Is the incoming DATE, TIMESTAMP, or TIMESTAMPTZ. TIMESTAMPTZ arguments are implicitly cast to TIMESTAMP. If the start date falls on the last day of the month, or if the resulting month has fewer days than the given day of the month, then the result is the last day of the resulting month. Otherwise, the result has the same start day.
n    Can be any integer.

Notes
ADD_MONTHS() is an invariant function if called with DATE or TIMESTAMP, but is stable with TIMESTAMPTZ in that its results can change based on TIME ZONE settings.

Examples
The following example's results include a leap year:

SELECT ADD_MONTHS('31-Jan-08', 1) "Months";
   Months
------------
 2008-02-29
(1 row)

The next example adds four months to January and returns a date in May:

SELECT ADD_MONTHS('31-Jan-08', 4) "Months";
   Months
------------
 2008-05-31
(1 row)

This example subtracts 4 months from January, returning a date in September:

SELECT ADD_MONTHS('31-Jan-08', -4) "Months";
   Months
------------
 2007-09-30
(1 row)

Because the following example specifies NULL, the result set is empty:

SELECT ADD_MONTHS('31-Jan-03', NULL) "Months";
 Months
--------
(1 row)

This example provides no date argument, so even though the number of months specified is 1, the result set is empty:

SELECT ADD_MONTHS(NULL, 1) "Months";
 Months
--------
(1 row)

In this example, the date field defaults to a timestamp, so the PST is ignored. Even though the number of months specified is 24, the result falls on the same date two years later:

SET TIME ZONE 'America/New_York';
SELECT ADD_MONTHS('2008-02-29 23:30 PST', 24);
 add_months
------------
 2010-02-28
(1 row)
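The month arithmetic described in the Parameters section can be sketched in plain Python. The helper below is hypothetical (not Vertica code) and ignores time zones entirely; it reproduces the two rules above: a start date on the last day of its month maps to the last day of the target month, and a day number that does not exist in the target month is clamped:

```python
# Hypothetical add_months helper sketching ADD_MONTHS-style semantics.
import calendar
from datetime import date

def add_months(d, n):
    start_month_len = calendar.monthrange(d.year, d.month)[1]
    total = d.year * 12 + (d.month - 1) + n
    year, month0 = divmod(total, 12)
    month = month0 + 1
    target_month_len = calendar.monthrange(year, month)[1]
    if d.day == start_month_len:            # last day in -> last day out
        day = target_month_len
    else:
        day = min(d.day, target_month_len)  # clamp missing day numbers
    return date(year, month, day)

print(add_months(date(2008, 1, 31), 1))   # 2008-02-29 (leap year)
print(add_months(date(2008, 1, 31), -4))  # 2007-09-30
```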

This example specifies a timestamp with time zone, so the PST is taken into account. Notice that even though the timestamp is still February 29 in Pacific time, it is already the next day in New York, so the result falls on the same date in New York (two years later):

SET TIME ZONE 'America/New_York';
SELECT ADD_MONTHS('2008-02-29 23:30 PST'::TIMESTAMPTZ, 24);
 add_months
------------
 2010-03-01
(1 row)

AGE

Returns an INTERVAL value representing the difference between two TIMESTAMP values.

Syntax
AGE ( expression1 [ , expression2 ] )

Parameters
expression1    (TIMESTAMP) specifies the beginning of the INTERVAL.
expression2    (TIMESTAMP) specifies the end of the INTERVAL. The default is CURRENT_DATE (page 143).

Examples
The following example returns the age of a person born on March 2, 1972 on the date June 21, 1990, with a time elapse of 18 years, 3 months, and 19 days:

SELECT AGE(TIMESTAMP '1990-06-21', TIMESTAMP '1972-03-02');
           age
-------------------------
 18 years 3 mons 19 days
(1 row)

The next example shows the age of the same person (born March 2, 1972) as of February 24, 2009:

SELECT AGE(TIMESTAMP '2009-02-24', TIMESTAMP '1972-03-02');
            age
--------------------------
 36 years 11 mons 22 days
(1 row)

This example returns the age of a person born on November 21, 1939:

SELECT AGE(TIMESTAMP '1939-11-21');
          age
------------------------
 69 years 3 mons 3 days
(1 row)
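The years/months/days breakdown that AGE produces can be sketched in plain Python. The helper below is hypothetical (not Vertica code) and is a simplification: it works on DATE values and ignores any time-of-day component:

```python
# Hypothetical age helper sketching AGE's years/months/days breakdown.
from datetime import date

def age(later, earlier):
    years = later.year - earlier.year
    months = later.month - earlier.month
    days = later.day - earlier.day
    if days < 0:
        months -= 1
        # Borrow the length of the month just before `later`.
        end_of_prev_month = date(later.year, later.month, 1).toordinal() - 1
        days += date.fromordinal(end_of_prev_month).day
    if months < 0:
        years -= 1
        months += 12
    return years, months, days

print(age(date(1990, 6, 21), date(1972, 3, 2)))  # (18, 3, 19)
print(age(date(2009, 2, 24), date(1972, 3, 2)))  # (36, 11, 22)
```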

CLOCK_TIMESTAMP

Returns a value of type TIMESTAMP WITH TIME ZONE representing the current system-clock time.

Syntax
CLOCK_TIMESTAMP()

Notes
This function uses the date and time supplied by the operating system on the server to which you are connected, which should be the same across all servers. The value changes each time you call it.

Examples
The following command returns the current time on your system:

SELECT CLOCK_TIMESTAMP() "Current Time";

          Current Time
-------------------------------
 2009-02-24 12:35:39.697199-05
(1 row)

Each time you call the function, you get a different result. The difference in this example is in microseconds:

SELECT CLOCK_TIMESTAMP() "Time 1", CLOCK_TIMESTAMP() "Time 2";

            Time 1             |            Time 2
-------------------------------+-------------------------------
 2009-03-05 14:42:58.809879-05 | 2009-03-05 14:42:58.80988-05
(1 row)

See Also
STATEMENT_TIMESTAMP (page 160) and TRANSACTION_TIMESTAMP (page 165)

CURRENT_DATE

Returns the date (date-type value) on which the current transaction started.

Syntax
CURRENT_DATE

Notes
The CURRENT_DATE function does not require parentheses.

Examples
SELECT CURRENT_DATE;

    date
------------
 2009-02-23
(1 row)
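The contrast between CLOCK_TIMESTAMP and the transaction-scoped functions such as CURRENT_DATE can be sketched as a plain-Python analogy (this is not Vertica code, only an illustration of the two behaviors):

```python
# Analogy only: a value captured once at "transaction start" behaves like
# CURRENT_DATE/CURRENT_TIMESTAMP; a fresh clock read on every call behaves
# like CLOCK_TIMESTAMP. time.monotonic() is used so the ordering check below
# is reliable even if the wall clock is adjusted mid-run.
import time

transaction_start = time.monotonic()

stamp_a = transaction_start   # every statement in the "transaction"
stamp_b = transaction_start   # sees the same captured value

clock_1 = time.monotonic()    # CLOCK_TIMESTAMP-like: read the clock again
clock_2 = time.monotonic()    # a later call; can differ by microseconds

assert stamp_a == stamp_b     # stable within the "transaction"
assert clock_2 >= clock_1     # later reads never move backward
```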

CURRENT_TIME

Returns a value of type TIME WITH TIME ZONE representing the time of day.

Syntax
CURRENT_TIME [ ( precision ) ]

Parameters
precision    (INTEGER) causes the result to be rounded to the specified number of fractional digits in the seconds field. The range of INTEGER is 0-6.

Notes
This function returns the start time of the current transaction; the value does not change during the transaction. The intent is to allow a single transaction to have a consistent notion of the "current" time, so that multiple modifications within the same transaction bear the same timestamp.

Examples
SELECT CURRENT_TIME "Current Time";

    Current Time
--------------------
 12:45:12.186089-05
(1 row)

CURRENT_TIMESTAMP

Returns a value of type TIMESTAMP WITH TIME ZONE representing the start of the current transaction.

Syntax
CURRENT_TIMESTAMP [ ( precision ) ]

Parameters
precision    (INTEGER) causes the result to be rounded to the specified number of fractional digits in the seconds field. The range of INTEGER is 0-6.

Notes
This function returns the start time of the current transaction; the value does not change during the transaction. The intent is to allow a single transaction to have a consistent notion of the "current" time, so that multiple modifications within the same transaction bear the same timestamp.

Examples
SELECT CURRENT_TIMESTAMP;

          timestamptz
-------------------------------
 2009-02-24 12:49:33.649951-05

(1 row)

SELECT CURRENT_TIMESTAMP(2);
        timestamptz
---------------------------
 2009-03-16 15:41:46.72-04
(1 row)

DATE_PART
Is modeled on the traditional Ingres equivalent to the SQL-standard function EXTRACT.

Syntax
DATE_PART( field , source )

Parameters
field    Is a single-quoted string value that specifies the field to extract.
         Note: The field parameter values are the same as EXTRACT (page 153).
source   Is a date/time (page 96) expression.

Examples
The following example extracts the day value from the input parameters:
SELECT DATE_PART('day', TIMESTAMP '2009-02-24 20:38:40') "Day";
 Day
-----
  24
(1 row)

The following example extracts the month value from the input parameters:
SELECT DATE_PART('month', TIMESTAMP '2009-02-24 20:38:40') "Month";
 Month
-------
     2
(1 row)

The following example extracts the year value from the input parameters:
SELECT DATE_PART('year', TIMESTAMP '2009-02-24 20:38:40') "Year";
 Year
------
 2009
(1 row)

The following example extracts the hours from the input parameters:
SELECT DATE_PART('hour', TIMESTAMP '2009-02-24 20:38:40') "Hour";
 Hour
------
   20
(1 row)


The following example extracts the minutes from the input parameters:
SELECT DATE_PART('minutes', TIMESTAMP '2009-02-24 20:38:40') "Minutes";
 Minutes
---------
      38
(1 row)

The following example extracts the seconds from the input parameters:
SELECT DATE_PART('seconds', TIMESTAMP '2009-02-24 20:38:40') "Seconds";
 Seconds
---------
      40
(1 row)

SELECT DATE_PART('day', INTERVAL '29 days 23 hours');
 date_part
-----------
        29
(1 row)

Notice what happens to the above query if you add an hour:
SELECT DATE_PART('day', INTERVAL '29 days 24 hours');
 date_part
-----------
         0
(1 row)

Similarly, the following example returns 0 because hours roll over into the next-larger field at 24; an interval's hour field can hold values up to 23 only:
SELECT DATE_PART('hour', INTERVAL '24 hours 45 minutes');
 date_part
-----------
         0
(1 row)
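The rollover behavior above can be probed with other odd-sized intervals. The following statement is an illustrative addition, not part of the original manual; assuming the same normalization shown above (24 hours roll into the next day), an interval of '25 hours' leaves an hour field of 1:

```sql
-- Hypothetical check (not from the original manual): after normalization,
-- INTERVAL '25 hours' carries 1 hour past a full day, so the hour field is 1.
SELECT DATE_PART('hour', INTERVAL '25 hours');
```

Some databases instead keep the interval as 25 literal hours and would return 25, so verify the normalization behavior against your own server before relying on it.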

DATE_TRUNC
Is conceptually similar to the TRUNC (page 188) function for numbers. The return value is of type TIMESTAMP or INTERVAL with all fields less significant than the selected one set to zero (or to one, for day and month).

Syntax
DATE_TRUNC( field , source )

Parameters
field    Is a string constant that selects the precision to which to truncate the input value. Valid values for field are:
         millennium, century, decade, year, month, week, day, hour, minute, second, milliseconds, microseconds

source   Is a value expression of type TIMESTAMP or INTERVAL. Values of type DATE and TIME are cast automatically, to TIMESTAMP or INTERVAL, respectively.

Examples The following example returns the hour and truncates the minutes and seconds:
SELECT DATE_TRUNC('hour', TIMESTAMP '2009-02-24 13:38:40') AS hour;
        hour
---------------------
 2009-02-24 13:00:00
(1 row)

The following example returns the year and defaults month and day to January 1, truncating the rest of the string:
SELECT DATE_TRUNC('year', TIMESTAMP '2009-02-24 13:38:40') AS year;
        year
---------------------
 2009-01-01 00:00:00
(1 row)

The following example returns the year and month and defaults day of month to 1, truncating the rest of the string:
SELECT DATE_TRUNC('month', TIMESTAMP '2009-02-24 13:38:40') AS month;
        month
---------------------
 2009-02-01 00:00:00
(1 row)
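The same pattern extends to finer granularities. The following statement is an illustrative addition, not copied from the manual, assuming 'minute' is handled like the fields shown above:

```sql
-- Truncate to the minute: seconds and smaller fields are zeroed.
SELECT DATE_TRUNC('minute', TIMESTAMP '2009-02-24 13:38:40');
-- expected: 2009-02-24 13:38:00
```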

DATEDIFF
Returns the difference between two date or time values, based on the specified start and end arguments.

Syntax 1
SELECT DATEDIFF ( datepart , startdate , enddate );

Syntax 2
SELECT DATEDIFF ( datepart , starttime , endtime );


Parameters
datepart    Returns the number of specified datepart boundaries between the specified startdate and enddate. Can be an unquoted identifier, a quoted string, or an expression in parentheses that evaluates to the datepart as a character string. The following table lists the valid datepart arguments.

            datepart      abbreviation
            -----------   ------------------------
            year          yy, yyyy
            quarter       qq, q
            month         mm, m
            day           dd, d, dy, dayofyear, y
            week          wk, ww
            hour          hh
            minute        mi, n
            second        ss, s
            millisecond   ms
            microsecond   mcs, us

startdate   Is the start date for the calculation and is an expression that returns a TIMESTAMP (page 99), DATE (page 96), or TIMESTAMPTZ value. The startdate value is not included in the count.

enddate     Is the end date for the calculation and is an expression that returns a TIMESTAMP (page 99), DATE (page 96), or TIMESTAMPTZ value. The enddate value is included in the count.

starttime   Is the start time for the calculation and is an expression that returns an INTERVAL (page 102) or TIME (page 97) data type. The starttime value is not included in the count. Year, quarter, or month dateparts are not allowed.

endtime     Is the end time for the calculation and is an expression that returns an INTERVAL (page 102) or TIME (page 97) data type. The endtime value is included in the count. Year, quarter, or month dateparts are not allowed.

Notes
•  DATEDIFF() is an immutable[1] function with a default type of TIMESTAMP. It also takes DATE. If TIMESTAMPTZ is specified, the function is stable.
•  Vertica accepts statements written in any of the following forms:
   DATEDIFF(year, s, e);
   DATEDIFF('year', s, e);
   If you use an expression, the expression must be enclosed in parentheses:
   DATEDIFF((expr), s, e);
•  Starting arguments are not included in the count, but end arguments are included.

The datepart boundaries
DATEDIFF calculates results according to ticks, or boundaries, within the date range or time range. Results are calculated based on the specified datepart. Consider the following statement and its results:
SELECT DATEDIFF('year', TO_DATE('01-01-2005','MM-DD-YYYY'), TO_DATE('12-31-2008','MM-DD-YYYY'));
 datediff
----------
        3
(1 row)

In the above example, we specified a datepart of year, a startdate of January 1, 2005 and an enddate of December 31, 2008. DATEDIFF returns 3 by counting the year intervals as follows:
[1] January 1, 2006 + [2] January 1, 2007 + [3] January 1, 2008 = 3

The function returns 3, and not 4, because startdate (January 1, 2005) is not counted in the calculation. DATEDIFF also ignores the months between January 1, 2008 and December 31, 2008 because the datepart specified is year and only the start of each year is counted. Sometimes the enddate occurs earlier in the ending year than the startdate in the starting year. For example, assume a datepart of year, a startdate of August 15, 2005, and an enddate of January 1, 2009. In this scenario, less than four full years have elapsed, but DATEDIFF counts the same way it did in the previous example, returning 4 because it returns the number of January 1s between the limits.

In the following query, Vertica recognizes the full year 2005 as the starting year and 2009 as the ending year.
SELECT DATEDIFF('year', TO_DATE('08-15-2005','MM-DD-YYYY'), TO_DATE('01-01-2009','MM-DD-YYYY'));

The count occurs as follows:
[1] January 1, 2006 + [2] January 1, 2007 + [3] January 1, 2008 + [4] January 1, 2009 = 4

[1] Immutable functions return the same answers when provided the same inputs. For example, 2+2 always equals 4.


Even though August 15 has not yet occurred in the enddate year, the function counts the entire enddate year as one tick or boundary because of the year datepart.

Examples
Year: In this example, the startdate and enddate are adjacent. Although they differ by only one second, they fall on opposite sides of a year boundary, so the result set is 1.
SELECT DATEDIFF('year', TIMESTAMP '2008-12-31 23:59:59', '2009-01-01 00:00:00');
 datediff
----------
        1
(1 row)

Quarters start in January, April, July, and October. In the following example, the result is 0 because the difference from January to February in the same calendar year does not cross a quarter boundary:
SELECT DATEDIFF('qq', TO_DATE('01-01-1995','MM-DD-YYYY'), TO_DATE('02-02-1995','MM-DD-YYYY'));
 datediff
----------
        0
(1 row)

The next example, however, returns 8 quarters because the difference spans two full years. The extra month is ignored:
SELECT DATEDIFF('quarter', TO_DATE('01-01-1993','MM-DD-YYYY'), TO_DATE('02-02-1995','MM-DD-YYYY'));
 datediff
----------
        8
(1 row)

Months are based on real calendar months. The following statement returns 1 because there is one month difference between January and February in the same calendar year:
SELECT DATEDIFF('mm', TO_DATE('01-01-2005','MM-DD-YYYY'), TO_DATE('02-02-2005','MM-DD-YYYY'));
 datediff
----------
        1
(1 row)

The next example returns -1 because the later date is given first:
SELECT DATEDIFF('month', TO_DATE('02-02-1995','MM-DD-YYYY'), TO_DATE('01-01-1995','MM-DD-YYYY'));
 datediff
----------
       -1
(1 row)

And this third example returns 23 because there are 23 month boundaries between February 1993 and January 1995:


SELECT DATEDIFF('m', TO_DATE('02-02-1993','MM-DD-YYYY'), TO_DATE('01-01-1995','MM-DD-YYYY'));
 datediff
----------
       23
(1 row)

Weeks start on Sunday at midnight. The first example returns 0 because, even though the week starts on a Sunday, it is not a full calendar week:
SELECT DATEDIFF('ww', TO_DATE('02-22-2009','MM-DD-YYYY'), TO_DATE('02-28-2009','MM-DD-YYYY'));
 datediff
----------
        0
(1 row)

The following example returns 1 (week); January 1, 2000 fell on a Saturday.
SELECT DATEDIFF('week', TO_DATE('01-01-2000','MM-DD-YYYY'), TO_DATE('01-02-2000','MM-DD-YYYY'));
 datediff
----------
        1
(1 row)

In the next example, DATEDIFF() counts the weeks between January 1, 1995 and February 2, 1995 and returns 4 (weeks):
SELECT DATEDIFF('wk', TO_DATE('01-01-1995','MM-DD-YYYY'), TO_DATE('02-02-1995','MM-DD-YYYY'));
 datediff
----------
        4
(1 row)

The next example returns a difference of 100 weeks:
SELECT DATEDIFF('ww', TO_DATE('02-02-2006','MM-DD-YYYY'), TO_DATE('01-01-2008','MM-DD-YYYY'));
 datediff
----------
      100
(1 row)
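As a rough cross-check of the 100-week result, you can count day boundaries over the same range. This query is an illustrative addition, not part of the original manual: February 2, 2006 to January 1, 2008 spans 698 days, which is consistent with the range containing 100 Sunday (week) boundaries:

```sql
-- Hypothetical cross-check: count days between the same two dates.
SELECT DATEDIFF('day', TO_DATE('02-02-2006','MM-DD-YYYY'),
                       TO_DATE('01-01-2008','MM-DD-YYYY'));
-- expected: 698
```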

Days are based on real calendar days. The first example returns 31, the full number of days in the month of July 2008.
SELECT DATEDIFF('day', 'July 1, 2008', 'Aug 1, 2008'::date);
 datediff
----------
       31
(1 row)

Just over two years of days:


SELECT DATEDIFF('d', TO_TIMESTAMP('01-01-1993','MM-DD-YYYY'), TO_TIMESTAMP('02-02-1995','MM-DD-YYYY'));
 datediff
----------
      762
(1 row)

Hours, minutes, and seconds are based on clock time. The first example counts backwards from March 2 to February 14 and returns -384 hours:
SELECT DATEDIFF('hour', TO_DATE('03-02-2009','MM-DD-YYYY'), TO_DATE('02-14-2009','MM-DD-YYYY'));
 datediff
----------
     -384
(1 row)

Another hours example:
SELECT DATEDIFF('hh', TO_TIMESTAMP('01-01-1993','MM-DD-YYYY'), TO_TIMESTAMP('02-02-1995','MM-DD-YYYY'));
 datediff
----------
    18288
(1 row)

This example counts the minutes backwards:
SELECT DATEDIFF('mi', TO_TIMESTAMP('01-01-1993 03:00:45','MM-DD-YYYY HH:MI:SS'),
                      TO_TIMESTAMP('01-01-1993 01:30:21','MM-DD-YYYY HH:MI:SS'));
 datediff
----------
      -90
(1 row)

And this example counts the minutes forward:
SELECT DATEDIFF('minute', TO_DATE('01-01-1993','MM-DD-YYYY'), TO_DATE('02-02-1995','MM-DD-YYYY'));
 datediff
----------
  1097280
(1 row)

In the following example, the query counts the difference in seconds, beginning at a start time of 4:44 and ending at 5:55 with an interval of 2 days:
SELECT DATEDIFF('ss', TIME '04:44:42.315786', INTERVAL '2 05:55:52.963558');
 datediff
----------
   177070
(1 row)
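A minimal illustration of the same second-boundary counting with two plain TIME values; this is an added example, not from the original manual. The start value is excluded and the end value included, so 90 second boundaries are crossed:

```sql
-- Hypothetical example: values 1 minute 30 seconds apart cross
-- 90 second boundaries.
SELECT DATEDIFF('ss', TIME '00:00:00', TIME '00:01:30');
-- expected: 90
```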

See Also Date/Time Expressions (page 76)


EXTRACT
Retrieves subfields such as year or hour from date/time values and returns values of type DOUBLE PRECISION (page 105). EXTRACT is primarily intended for computational processing rather than for formatting date/time values for display.

Syntax
EXTRACT ( field FROM source )

Parameters
field    Is an identifier or string that selects what field to extract from the source value.
source   Is an expression of type DATE, TIMESTAMP, TIME, or INTERVAL.

Note: Expressions of type DATE are cast to TIMESTAMP.

Examples
SELECT EXTRACT (DAY FROM DATE '2008-12-25');
 date_part
-----------
        25
(1 row)

SELECT EXTRACT (MONTH FROM DATE '2008-12-25');
 date_part
-----------
        12
(1 row)
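The same pattern extends to other fields; the following is an added illustration, not part of the original manual, extracting the year:

```sql
SELECT EXTRACT (YEAR FROM DATE '2008-12-25');
-- expected: 2008
```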

GETDATE
Returns the current system date and time as a TIMESTAMP value.

Syntax
SELECT GETDATE();

Notes
•  GETDATE is a volatile function that requires parentheses but accepts no arguments.
•  This function uses the date and time supplied by the operating system on the server to which you are connected, which should be the same across all servers.
•  GETDATE internally converts CLOCK_TIMESTAMP() from TIMESTAMPTZ to TIMESTAMP.
•  This function is identical to SYSDATE() (page 161).


Example
SELECT GETDATE();
          getdate
----------------------------
 2009-02-18 16:39:58.628483
(1 row)

See Also Date/Time Expressions (page 76)

GETUTCDATE
Returns the current system date and time as a TIMESTAMP value relative to UTC.

Syntax
SELECT GETUTCDATE();

Notes
•  GETUTCDATE is a volatile function that requires parentheses but accepts no arguments.
•  This function uses the date and time supplied by the operating system on the server to which you are connected, which should be the same across all servers.
•  GETUTCDATE is internally converted to CLOCK_TIMESTAMP (page 143)() at TIME ZONE 'UTC'.

Example
SELECT GETUTCDATE();
         getutcdate
----------------------------
 2009-02-18 16:39:58.628483
(1 row)

See Also Date/Time Expressions (page 76)

ISFINITE
Tests for the special TIMESTAMP constant INFINITY and returns a value of type BOOLEAN.

Syntax
ISFINITE( timestamp )

Parameters
timestamp   Is an expression of type TIMESTAMP.


Examples
SELECT ISFINITE(TIMESTAMP '2009-02-16 21:28:30');
 isfinite
----------
 t
(1 row)

SELECT ISFINITE(TIMESTAMP 'INFINITY');
 isfinite
----------
 f
(1 row)

LAST_DAY
Returns the last day of the month based on a TIMESTAMP. The TIMESTAMP can be supplied as a DATE or a TIMESTAMPTZ data type.

Syntax
SELECT LAST_DAY ( date );

Notes
The LAST_DAY() function is invariant unless it is called with a TIMESTAMPTZ, in which case it is stable.

Examples
The following example returns the last day of the month, February, as 29 because 2008 was a leap year:
SELECT LAST_DAY('2008-02-28 23:30 PST') "Last";
    Last
------------
 2008-02-29
(1 row)

The following example returns the last day of the month in March, after converting the string value to the specified DATE type:
SELECT LAST_DAY('2003/03/15') "Last";
    Last
------------
 2003-03-31
(1 row)

The following example returns the last day of February in the specified year (not a leap year):
SELECT LAST_DAY('2003/02/03') "Last";
    Last
------------
 2003-02-28
(1 row)
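For contrast with the non-leap-year case above, the following added example (not part of the original manual) uses a leap year, where February has 29 days:

```sql
SELECT LAST_DAY('2012/02/03') "Last";
-- expected: 2012-02-29
```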


LOCALTIME
Returns a value of type TIME representing the time of day.

Syntax
LOCALTIME [ ( precision ) ]

Parameters
precision   Causes the result to be rounded to the specified number of fractional digits in the seconds field.

Notes
This function returns the start time of the current transaction; the value does not change during the transaction. The intent is to allow a single transaction to have a consistent notion of the "current" time, so that multiple modifications within the same transaction bear the same timestamp.

Examples
SELECT LOCALTIME;
      time
-----------------
 16:16:06.790771
(1 row)

LOCALTIMESTAMP
Returns a value of type TIMESTAMP representing today's date and time of day.

Syntax
LOCALTIMESTAMP [ ( precision ) ]

Parameters
precision   Causes the result to be rounded to the specified number of fractional digits in the seconds field.

Notes
This function returns the start time of the current transaction; the value does not change during the transaction. The intent is to allow a single transaction to have a consistent notion of the "current" time, so that multiple modifications within the same transaction bear the same timestamp.

Examples
SELECT LOCALTIMESTAMP;
         timestamp
--------------------------
 2009-02-24 14:47:48.5951
(1 row)

MONTHS_BETWEEN
Returns the number of months between date1 and date2 as a FLOAT8.

Syntax
SELECT MONTHS_BETWEEN ( x , y );

Parameters
x , y   Takes TIMESTAMP, DATE, or TIMESTAMPTZ arguments.

Notes
MONTHS_BETWEEN() is invariant for TIMESTAMP and DATE but stable for TIMESTAMPTZ. If date1 is later than date2, then the result is positive. If date1 is earlier than date2, then the result is negative. If date1 and date2 are either the same days of the month or both are the last days of their respective months, then the result is always an integer. Otherwise MONTHS_BETWEEN returns a FLOAT8 result based on a 31-day month, considering the fractional difference between date1 and date2.

Examples
Note that the following result is an integral number of months because the dates fall on the same day of the month:
SELECT MONTHS_BETWEEN('2009-03-07 16:00'::TIMESTAMP, '2009-04-07 15:00'::TIMESTAMP);
 months_between
----------------
             -1
(1 row)

The result from the following example is also integral because the days fall on the last day of their respective months:
SELECT MONTHS_BETWEEN('29Feb2000', '30Sep2000') "Months";
 Months
--------
     -7
(1 row)

In this example, and in the example that immediately follows it, MONTHS_BETWEEN() returns the number of months between date1 and date2 as a fraction because the days do not fall on the same day or on the last day of their respective months:
SELECT MONTHS_BETWEEN(TO_DATE('02-02-1995','MM-DD-YYYY'), TO_DATE('01-01-1995','MM-DD-YYYY')) "Months";
      Months
------------------
 1.03225806451613
(1 row)

SELECT MONTHS_BETWEEN(TO_DATE ('2003/01/01', 'yyyy/mm/dd'), TO_DATE ('2003/03/14', 'yyyy/mm/dd')) "Months";
       Months
-------------------
 -2.41935483870968
(1 row)

The following two examples use the same date1 and date2 strings, but they are cast to different data types (TIMESTAMP and TIMESTAMPTZ). The result set is the same for both statements:
SELECT MONTHS_BETWEEN('2008-04-01'::timestamp, '2008-02-29'::timestamp);
  months_between
------------------
 1.09677419354839
(1 row)

SELECT MONTHS_BETWEEN('2008-04-01'::timestamptz, '2008-02-29'::timestamp);
  months_between
------------------
 1.09677419354839
(1 row)

The following two examples show alternate inputs:
SELECT MONTHS_BETWEEN('2008-04-01'::date, '2008-02-29'::timestamp);
  months_between
------------------
 1.09677419354839
(1 row)

SELECT MONTHS_BETWEEN('2008-02-29'::timestamptz, '2008-04-01'::date);
  months_between
-------------------
 -1.09677419354839
(1 row)

NOW
Is equivalent to CURRENT_TIMESTAMP (page 144) except that it does not accept a precision parameter. It returns a value of type TIMESTAMP WITH TIME ZONE representing the start of the current transaction.

Syntax
NOW()

Notes
This function returns the start time of the current transaction; the value does not change during the transaction. The intent is to allow a single transaction to have a consistent notion of the "current" time, so that multiple modifications within the same transaction bear the same timestamp.

Examples
SELECT NOW();
             now
------------------------------
 2009-02-18 16:34:22.11775-05
(1 row)

See Also
CURRENT_TIMESTAMP (page 144)

OVERLAPS
Returns true when two time periods overlap, false when they do not overlap.

Syntax
( start, end ) OVERLAPS ( start, end )
( start, end ) OVERLAPS ( start, interval )
( start, interval ) OVERLAPS ( start, end )
( start, interval ) OVERLAPS ( start, interval )

Parameters
start      Is a DATE, TIME, or TIMESTAMP value that specifies the beginning of a time period.
end        Is a DATE, TIME, or TIMESTAMP value that specifies the end of a time period.
interval   Is a value that specifies the length of the time period.

Examples
The first command returns true for an overlap in date range of 2007-02-16 – 2007-12-21 with 2007-10-30 – 2008-10-30.

SELECT (DATE '2007-02-16', DATE '2007-12-21') OVERLAPS (DATE '2007-10-30', DATE '2008-10-30');
 overlaps
----------
 t
(1 row)

The next command returns false for an overlap in date range of 2007-02-16 – 2007-12-21 with 2008-10-30 – 2008-10-30:
SELECT (DATE '2007-02-16', DATE '2007-12-21') OVERLAPS (DATE '2008-10-30', DATE '2008-10-30');
 overlaps
----------
 f
(1 row)

The next command returns false for an overlap between the period starting 2007-02-16 and lasting 1 day 12:59:10 and the period starting 2007-10-30 and lasting the same interval:
SELECT (DATE '2007-02-16', INTERVAL '1 12:59:10') OVERLAPS (DATE '2007-10-30', INTERVAL '1 12:59:10');
 overlaps
----------
 f
(1 row)

STATEMENT_TIMESTAMP
Is similar to TRANSACTION_TIMESTAMP (page 165). It returns a value of type TIMESTAMP WITH TIME ZONE representing the start of the current statement.

Syntax
STATEMENT_TIMESTAMP()

Notes
This function returns the start time of the current statement; the value does not change during the statement. The intent is to allow a single statement to have a consistent notion of the "current" time, so that multiple modifications within the same statement bear the same timestamp.

Examples
SELECT STATEMENT_TIMESTAMP();
      statement_timestamp
-------------------------------
 2009-04-07 17:21:17.686493-04
(1 row)

See Also
CLOCK_TIMESTAMP (page 143) and TRANSACTION_TIMESTAMP (page 165)

SYSDATE
Returns the current system date and time as a TIMESTAMP value.

Syntax
SELECT SYSDATE();

Notes
•  SYSDATE is a stable function (called once per statement) that requires no arguments. Parentheses are optional.
•  This function uses the date and time supplied by the operating system on the server to which you are connected, which should be the same across all servers.
•  In implementation, SYSDATE converts CLOCK_TIMESTAMP (page 143) from TIMESTAMPTZ to TIMESTAMP.
•  This function is identical to GETDATE (page 153).

Examples
SELECT SYSDATE();
          sysdate
----------------------------
 2009-02-18 16:39:58.628483
(1 row)

SELECT SYSDATE;
          sysdate
----------------------------
 2009-03-16 15:37:56.917191
(1 row)

See Also
Date/Time Expressions (page 76)

TIME_SLICE
Aggregates data by different fixed-time intervals and returns a rounded-up input TIMESTAMP value to a value corresponding to the start or end of the time slice interval. For example, given a 3-second time slice interval and an input TIMESTAMP value such as '2000-10-28 00:00:01', the start time of the time slice is '2000-10-28 00:00:00', and the end time of the same time slice is '2000-10-28 00:00:03'.

Syntax
TIME_SLICE(expr, slice_length, [ time_unit = 'SECOND' ], [ start_or_end = 'START' ] )

Return
Rounded-up TIMESTAMP value. Its data type is the same as the first input argument, expr.

Parameters
expr           Is evaluated on each row. Can be either a column of type TIMESTAMP or a (string) constant, such as '2004-10-19 10:23:54', which can be parsed into a TIMESTAMP value.
slice_length   Is the length of the slice specified in integers. Input must be a positive integer.
time_unit      [Optional] Is the time unit of the slice, with a default of SECOND. Domain of possible values: { HOUR, MINUTE, SECOND, MILLISECOND, MICROSECOND }
start_or_end   [Optional] Indicates whether the returned value corresponds to the start or end time of the time slice interval. The default is START. Domain of possible values: { START, END }

Notes
•  'ms' is a synonym of millisecond and 'us' is a synonym of microsecond.
•  Vertica supports TIMESTAMP for TIME_SLICE instead of DATE and TIME data types. The corresponding SQL data type for TIMESTAMP is TIMESTAMP WITHOUT TIME ZONE.
•  TIME_SLICE exhibits the following behaviors regarding NULLs:
   §  The system returns an error when any one of the slice_length, time_unit, or start_or_end parameters is NULL.
   §  When slice_length, time_unit, and start_or_end contain legal values, and expr is NULL, the system returns a NULL value, instead of an error.

Examples
The following example shows the (default) start time of a 3-second time slice:
SELECT TIME_SLICE('2004-10-19 00:00:01', 3);
     time_slice
---------------------
 2004-10-19 00:00:00
(1 row)

This example shows the end time of a 3-second time slice:
SELECT TIME_SLICE('2004-10-19 00:00:01', 3, 'SECOND', 'END');
     time_slice
---------------------
 2004-10-19 00:00:03
(1 row)

This example returns results in milliseconds, using a 3-millisecond time slice:
SELECT TIME_SLICE('2004-10-19 00:00:01', 3, 'ms');
       time_slice
-------------------------
 2004-10-19 00:00:00.999
(1 row)

This example returns results in microseconds, using a 3-microsecond time slice:
SELECT TIME_SLICE('2004-10-19 00:00:01', 3, 'us');
         time_slice
----------------------------
 2004-10-19 00:00:00.999999
(1 row)

The preceding examples use a 3-unit interval with an input value of '00:00:01'. To focus specifically on seconds, the discussion omits the date, though all values are implied as being part of the timestamp. Given the input of '00:00:01':
•  '00:00:00' is the start of the 3-second time slice
•  '00:00:03' is the end of the 3-second time slice
•  '00:00:03' is also the start of the second 3-second time slice, because of time slice boundaries: the end value of a time slice does not belong to that time slice; it starts the next one.

When the time slice interval is not a factor of 60 seconds, such as the given slice length of 9 in the following example, the slice does not always start or end on 00 seconds:
SELECT TIME_SLICE('2009-02-14 20:13:01', 9);
     time_slice
---------------------
 2009-02-14 20:12:54
(1 row)

This is expected behavior, as the following properties are true for all time slices:
•  Equal in length
•  Consecutive (no gaps between them)
•  Non-overlapping

To force the above example containing '2009-02-14 20:13:01' to start at '2009-02-14 20:13:00', adjust the output timestamp values so that the remainder of 54 counts up to 60:
SELECT TIME_SLICE('2009-02-14 20:13:01', 9) + '6 seconds'::INTERVAL AS time;
        time
---------------------
 2009-02-14 20:13:00
(1 row)

Alternatively, you could use a different slice length, divisible by 60, such as 5:
SELECT TIME_SLICE('2009-02-14 20:13:01', 5);
     time_slice
---------------------
 2009-02-14 20:13:00
(1 row)

Within the TIME_SLICE group (the set of rows belonging to the same time slice) you can use analytic functions such as FIRST_VALUE/LAST_VALUE (page 128) to find, for example, the first/last price value within each time slice group. This could be useful if you want to sample input data by choosing one row from each time slice group.
SELECT date_key, transaction_time, sales_dollar_amount,
       TIME_SLICE(DATE '2000-01-01' + date_key + transaction_time, 3),
       FIRST_VALUE(sales_dollar_amount)
         OVER (PARTITION BY TIME_SLICE(DATE '2000-01-01' + date_key + transaction_time, 3)
               ORDER BY DATE '2000-01-01' + date_key + transaction_time) AS first_value
FROM store.store_sales_fact
LIMIT 20;
 date_key | transaction_time | sales_dollar_amount |     time_slice      | first_value
----------+------------------+---------------------+---------------------+-------------
        1 | 00:41:16         |                 164 | 2000-01-02 00:41:15 |         164
        1 | 00:41:33         |                 310 | 2000-01-02 00:41:33 |         310
        1 | 15:32:51         |                 271 | 2000-01-02 15:32:51 |         271
        1 | 15:33:15         |                 419 | 2000-01-02 15:33:15 |         419
        1 | 15:33:44         |                 193 | 2000-01-02 15:33:42 |         193
        1 | 16:36:29         |                 466 | 2000-01-02 16:36:27 |         466
        1 | 16:36:44         |                 250 | 2000-01-02 16:36:42 |         250
        2 | 03:11:28         |                  39 | 2000-01-03 03:11:27 |          39
        3 | 03:55:15         |                 375 | 2000-01-04 03:55:15 |         375
        3 | 11:58:05         |                 369 | 2000-01-04 11:58:03 |         369
        3 | 11:58:24         |                 174 | 2000-01-04 11:58:24 |         174
        3 | 11:58:52         |                 449 | 2000-01-04 11:58:51 |         449
        3 | 19:01:21         |                 201 | 2000-01-04 19:01:21 |         201
        3 | 22:15:05         |                 156 | 2000-01-04 22:15:03 |         156
        4 | 13:36:57         |                -125 | 2000-01-05 13:36:57 |        -125
        4 | 13:37:24         |                -251 | 2000-01-05 13:37:24 |        -251
        4 | 13:37:54         |                 353 | 2000-01-05 13:37:54 |         353
        4 | 13:38:04         |                 426 | 2000-01-05 13:38:03 |         426
        4 | 13:38:31         |                 209 | 2000-01-05 13:38:30 |         209
        5 | 10:21:24         |                 488 | 2000-01-06 10:21:24 |         488
(20 rows)

Notice how TIME_SLICE rounds the transaction time down to the 3-second slice length.

See Also
FIRST_VALUE/LAST_VALUE (page 128)

TIMEOFDAY
Returns a text string representing the time of day.

Syntax
TIMEOFDAY()

Notes
TIMEOFDAY() returns the wall-clock time and advances during transactions.

Examples
SELECT TIMEOFDAY();
              timeofday
-------------------------------------
 Tue Apr 07 17:22:01.190445 2009 EDT
(1 row)

TRANSACTION_TIMESTAMP
Is equivalent to CURRENT_TIMESTAMP (page 144) except that it does not accept a precision parameter. It returns a value of type TIMESTAMP WITH TIME ZONE representing the start of the current transaction.

Syntax
TRANSACTION_TIMESTAMP()

Notes
This function returns the start time of the current transaction; the value does not change during the transaction. The intent is to allow a single transaction to have a consistent notion of the "current" time, so that multiple modifications within the same transaction bear the same timestamp.

Examples
SELECT TRANSACTION_TIMESTAMP();
     transaction_timestamp
------------------------------
 2009-02-18 16:34:22.11775-05
(1 row)

See Also
CLOCK_TIMESTAMP (page 143) and STATEMENT_TIMESTAMP (page 160)

Formatting Functions
The formatting functions in this section provide a powerful tool set for converting various data types (DATE/TIME, INTEGER, FLOATING POINT) to formatted strings and for converting from formatted strings to specific data types. These functions all follow a common calling convention:
•  The first argument is the value to be formatted.
•  The second argument is a template that defines the output or input format.
   Exception: The TO_TIMESTAMP function can take a single double precision argument.

TO_CHAR
Converts various date/time and numeric values into text strings.

Syntax
TO_CHAR ( expression [, pattern ] )

Parameters
expression   (TIMESTAMP, INTERVAL, INTEGER, DOUBLE PRECISION) specifies the value to convert.
pattern      [Optional] (CHAR or VARCHAR) specifies an output pattern string using the Template Patterns for Date/Time Formatting (page 171) and/or Template Patterns for Numeric Formatting (page 174).

Notes
•  TO_CHAR(any) casts any type, except BINARY/VARBINARY, to VARCHAR. The following example returns the error you receive if you attempt to cast TO_CHAR to a binary data type:
   SELECT TO_CHAR('abc'::VARBINARY);
   ERROR: cannot cast type binary varying to character varying
•  The TO_CHAR function's day-of-the-week numbering (see the 'D' template pattern (page 171)) is different from that of the EXTRACT (page 153) function.
•  Given an INTERVAL type, TO_CHAR formats HH and HH12 as hours in a single day, while HH24 can output hours exceeding a single day, for example, >24.
•  Ordinary text is allowed in TO_CHAR templates and is output literally. You can put a substring in double quotes to force it to be interpreted as literal text even if it contains pattern key words. For example, in '"Hello Year "YYYY', the YYYY is replaced by the year data, but the single Y in Year is not.
•  To use a double quote character in the output, precede it with a double backslash. This is necessary because the backslash already has a special meaning in a string constant. For example: '\\"YYYY Month\\"'

•  TO_CHAR does not support the use of V combined with a decimal point. For example: 99.9V99 is not allowed.

Examples
Expression                                                    Result
SELECT TO_CHAR(CURRENT_TIMESTAMP, 'Day, DD HH12:MI:SS');      'Tuesday  , 06  05:39:18'
SELECT TO_CHAR(CURRENT_TIMESTAMP, 'FMDay, FMDD HH12:MI:SS');  'Tuesday, 6 05:39:18'
SELECT TO_CHAR(-0.1, '99.99');                                '  -.10'
SELECT TO_CHAR(-0.1, 'FM9.99');                               '-.1'
SELECT TO_CHAR(0.1, '0.9');                                   ' 0.1'
SELECT TO_CHAR(12, '9990999.9');                              '    0012.0'
SELECT TO_CHAR(12, 'FM9990999.9');                            '0012.'
SELECT TO_CHAR(485, '999');                                   ' 485'
SELECT TO_CHAR(-485, '999');                                  '-485'
SELECT TO_CHAR(485, '9 9 9');                                 ' 4 8 5'
SELECT TO_CHAR(1485, '9,999');                                ' 1,485'
SELECT TO_CHAR(1485, '9G999');                                ' 1 485'
SELECT TO_CHAR(148.5, '999.999');                             ' 148.500'
SELECT TO_CHAR(148.5, 'FM999.999');                           '148.5'
SELECT TO_CHAR(148.5, 'FM999.990');                           '148.500'
SELECT TO_CHAR(148.5, '999D999');                             ' 148.500'
SELECT TO_CHAR(3148.5, '9G999D999');                          ' 3 148.500'
SELECT TO_CHAR(-485, '999S');                                 '485-'
SELECT TO_CHAR(-485, '999MI');                                '485-'
SELECT TO_CHAR(485, '999MI');                                 '485 '
SELECT TO_CHAR(485, 'FM999MI');                               '485'
SELECT TO_CHAR(485, 'PL999');                                 '+485'
SELECT TO_CHAR(485, 'SG999');                                 '+485'
SELECT TO_CHAR(-485, 'SG999');                                '-485'
SELECT TO_CHAR(-485, '9SG99');                                '4-85'
SELECT TO_CHAR(-485, '999PR');                                '<485>'
SELECT TO_CHAR(485, 'L999');                                  'DM 485'
SELECT TO_CHAR(485, 'RN');                                    '        CDLXXXV'
SELECT TO_CHAR(485, 'FMRN');                                  'CDLXXXV'

'99V999').4.333 TO_DATE Converts a string value to a DATE type. pattern ) Parameters expression pattern (CHAR or VARCHAR) specifies the value to convert (CHAR or VARCHAR) specifies an output pattern string using the Template Patterns for Date/Time Formatting (page 171) and/or Template Patterns for Numeric Formatting (page 174). because TO_TIMESTAMP expects one space only. 'V' ' 482nd' 'Good number: 485' 'Pre: 485 Post: . SELECT TO_CHAR(12.567 1999-12-25 1999-12-25 11:31:00 1999-12-25 11:31:00-05 3 days 00:16:40. SELECT TO_CHAR(482. 'YYYY MON') is correct.SQL Reference Manual SELECT TO_CHAR(5.999'). '99V999').333 secs'::INTERVAL). SELECT TO_CHAR(12. SELECT TO_CHAR(485.2. SELECT TO_CHAR('1999-12-25 11:31'::TIMESTAMP). § TO_TIMESTAMP('2000 JUN'. FX must be specified as the first item in the template.8. 'FMRN'). SELECT TO_CHAR('1999-12-25'::DATE). SELECT TO_CHAR('3 days 1000. SELECT TO_CHAR(12. This is necessary because the backslash already has a special meaning in a string constant. SELECT TO_CHAR(485. SELECT TO_CHAR('1999-12-25 11:31 EST'::TIMESTAMPTZ). '"Pre:"999" Post:" . precede it with a double backslash. '99V9'). For example: '\\"YYYY Month\\"' TO_TIMESTAMP and TO_DATE skip multiple blank spaces in the input string if the FX option is not used.45. 'FXYYYY MON') returns an error. '999th'). SELECT TO_CHAR(-1234. For example: § For example TO_TIMESTAMP('2000 JUN'.800' ' 12000' ' 12400' ' 125' -1234. -168- . Notes • • To use a double quote character in the output. '"Good number:"999'). Syntax TO_DATE ( expression .567).
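The TO_DATE template patterns above map closely onto the strftime/strptime directives found in most languages. The following Python sketch is an analogy only, not Vertica behavior; the directive names are Python's (%d roughly corresponds to DD, %b to Mon, %Y to YYYY):

```python
from datetime import datetime

# Parse '13 Feb 2000' the way TO_DATE('13 Feb 2000', 'DD Mon YYYY') would:
# %d = day of month, %b = abbreviated month name, %Y = four-digit year.
d = datetime.strptime('13 Feb 2000', '%d %b %Y').date()
print(d)  # 2000-02-13
```

As with TO_DATE, text that does not match the template (here, the literal spaces) must appear in the input in the expected positions.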

•	The YYYY conversion from string to TIMESTAMP or DATE has a restriction if you use a year with more than four digits. You must use a non-digit character or template after YYYY, otherwise the year is always interpreted as four digits. For example (with the year 20000): TO_DATE('200001131', 'YYYYMMDD') is interpreted as a four-digit year. Instead, use a non-digit separator after the year, such as TO_DATE('20000-1131', 'YYYY-MMDD') or TO_DATE('20000Nov31', 'YYYYMonDD').
•	In conversions from string to TIMESTAMP or DATE, the CC field is ignored if there is a YYY, YYYY or Y,YYY field. If CC is used with YY or Y, then the year is computed as (CC-1)*100+YY.

Examples

SELECT TO_DATE('13 Feb 2000', 'DD Mon YYYY');
 to_date
------------
 2000-02-13
(1 row)

See Also

Template Pattern Modifiers for Date/Time Formatting (page 174)

TO_TIMESTAMP

Converts a string value or a UNIX/POSIX epoch value to a TIMESTAMP WITH TIME ZONE type.

Syntax

TO_TIMESTAMP ( expression, pattern )
TO_TIMESTAMP ( unix-epoch )

Parameters

expression	(CHAR or VARCHAR) is the string to convert.
pattern	(CHAR or VARCHAR) specifies an output pattern string using the Template Patterns for Date/Time Formatting (page 171) and/or Template Patterns for Numeric Formatting (page 174).
unix-epoch	(DOUBLE PRECISION) specifies some number of seconds elapsed since midnight UTC of January 1, 1970, not counting leap seconds. INTEGER values are implicitly cast to DOUBLE PRECISION.

Notes

•	For more information about UNIX/POSIX time, see Wikipedia http://en.wikipedia.org/wiki/Unix_time.
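The unix-epoch form can be illustrated with Python's datetime module. This is an analogy, not Vertica behavior: Python is asked for the result in UTC here, while Vertica renders the result in the session time zone (01:00:00-04 in the example that follows is the same instant):

```python
from datetime import datetime, timezone

# 200120400 seconds after 1970-01-01 00:00:00 UTC, no leap seconds counted.
ts = datetime.fromtimestamp(200120400, tz=timezone.utc)
print(ts.isoformat())  # 1976-05-05T05:00:00+00:00
```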

•	Millisecond (MS) and microsecond (US) values in a conversion from string to TIMESTAMP are used as part of the seconds after the decimal point. This means for the format SS:MS, the input values 12:3, 12:30, and 12:300 specify the same number of milliseconds. To get three milliseconds, use 12:003, which the conversion counts as 12 + 0.003 = 12.003 seconds. For example, TO_TIMESTAMP('12:3', 'SS:MS') is not 3 milliseconds, but 300, because the conversion counts it as 12 + 0.3 seconds. Here is a more complex example: TO_TIMESTAMP('15:12:02.020.001230', 'HH:MI:SS.MS.US') is 15 hours, 12 minutes, and 2 seconds + 20 milliseconds + 1230 microseconds = 2.021230 seconds.
•	To use a double quote character in the output, precede it with a double backslash. This is necessary because the backslash already has a special meaning in a string constant. For example: '\\"YYYY Month\\"'
•	TO_TIMESTAMP and TO_DATE skip multiple blank spaces in the input string if the FX option is not used. For example:
	§	TO_TIMESTAMP('2000 JUN', 'YYYY MON') is correct.
	§	TO_TIMESTAMP('2000 JUN', 'FXYYYY MON') returns an error, because TO_TIMESTAMP expects one space only.
	FX must be specified as the first item in the template.
•	The YYYY conversion from string to TIMESTAMP or DATE has a restriction if you use a year with more than four digits. You must use a non-digit character or template after YYYY, otherwise the year is always interpreted as four digits. For example (with the year 20000): TO_DATE('200001131', 'YYYYMMDD') is interpreted as a four-digit year. Instead, use a non-digit separator after the year, such as TO_DATE('20000-1131', 'YYYY-MMDD') or TO_DATE('20000Nov31', 'YYYYMonDD').
•	In conversions from string to TIMESTAMP or DATE, the CC field is ignored if there is a YYY, YYYY or Y,YYY field. If CC is used with YY or Y, then the year is computed as (CC-1)*100+YY.

Examples

SELECT TO_TIMESTAMP('13 Feb 2009', 'DD Mon YYYY');
 to_timestamp
------------------------
 2009-02-13 00:00:00-05
(1 row)

SELECT TO_TIMESTAMP(200120400);
 to_timestamp
------------------------
 1976-05-05 01:00:00-04
(1 row)

See Also

Template Pattern Modifiers for Date/Time Formatting (page 174)

TO_NUMBER

Converts a string value to DOUBLE PRECISION.

Syntax

TO_NUMBER ( expression, [ pattern ] )

Parameters

expression	(CHAR or VARCHAR) specifies the string to convert.
pattern	(CHAR or VARCHAR) Optional parameter that specifies an output pattern string using the Template Patterns for Date/Time Formatting (page 171) and/or Template Patterns for Numeric Formatting (page 174). If the pattern parameter is omitted, the function returns a floating point.

Notes

To use a double quote character in the output, precede it with a double backslash. This is necessary because the backslash already has a special meaning in a string constant. For example: '\\"YYYY Month\\"'

Examples

SELECT TO_CHAR(2009, 'rn'), TO_NUMBER('mmix', 'rn');
 to_char         | to_number
-----------------+-----------
            mmix |      2009
(1 row)

SELECT TO_NUMBER('-123.456e-01');
 to_number
-----------
  -12.3456
(1 row)

Template Patterns for Date/Time Formatting

In an output template string (for TO_CHAR), there are certain patterns that are recognized and replaced with appropriately-formatted data from the value to be formatted. Any text that is not a template pattern is simply copied verbatim. Similarly, in an input template string (for anything other than TO_CHAR), template patterns identify the parts of the input data string to be looked at and the values to be found there. Certain modifiers can be applied to any template pattern to alter its behavior, as described in Template Pattern Modifiers for Date/Time Formatting (page 174).

Note: Vertica uses the ISO 8601:2004 style for date/time fields in Vertica *.log files. For example:
2008-09-16 14:40:59.123 TM Moveout:0x2aaaac002180 [Txn] <INFO>

Pattern	Description
HH	Hour of day (01-12)
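The 'rn'/'RN' pattern reads Roman numerals. A toy parser (illustrative only, not Vertica's implementation) shows the arithmetic behind TO_NUMBER('mmix', 'rn'):

```python
ROMAN = {'m': 1000, 'd': 500, 'c': 100, 'l': 50, 'x': 10, 'v': 5, 'i': 1}

def roman_to_int(s):
    vals = [ROMAN[ch] for ch in s.lower()]
    # A digit smaller than its right neighbor is subtractive, e.g. 'ix' = 9.
    return sum(-v if i + 1 < len(vals) and v < vals[i + 1] else v
               for i, v in enumerate(vals))

print(roman_to_int('mmix'))     # 2009
print(roman_to_int('cdlxxxv'))  # 485
```

Like the RN numeric pattern, this only makes sense for inputs between 1 and 3999.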

HH12	Hour of day (01-12)
HH24	Hour of day (00-23)
MI	Minute (00-59)
SS	Second (00-59)
MS	Millisecond (000-999)
US	Microsecond (000000-999999)
SSSS	Seconds past midnight (0-86399)
AM or A.M. or PM or P.M.	Meridian indicator (uppercase)
am or a.m. or pm or p.m.	Meridian indicator (lowercase)
Y,YYY	Year (4 and more digits) with comma
YYYY	Year (4 and more digits)
YYY	Last 3 digits of year
YY	Last 2 digits of year
Y	Last digit of year
IYYY	ISO year (4 and more digits)
IYY	Last 3 digits of ISO year
IY	Last 2 digits of ISO year
I	Last digit of ISO year
BC or B.C. or AD or A.D.	Era indicator (uppercase)
bc or b.c. or ad or a.d.	Era indicator (lowercase)
MONTH	Full uppercase month name (blank-padded to 9 chars)
Month	Full mixed-case month name (blank-padded to 9 chars)
month	Full lowercase month name (blank-padded to 9 chars)
MON	Abbreviated uppercase month name (3 chars)

Mon	Abbreviated mixed-case month name (3 chars)
mon	Abbreviated lowercase month name (3 chars)
MM	Month number (01-12)
DAY	Full uppercase day name (blank-padded to 9 chars)
Day	Full mixed-case day name (blank-padded to 9 chars)
day	Full lowercase day name (blank-padded to 9 chars)
DY	Abbreviated uppercase day name (3 chars)
Dy	Abbreviated mixed-case day name (3 chars)
dy	Abbreviated lowercase day name (3 chars)
DDD	Day of year (001-366)
DD	Day of month (01-31)
D	Day of week (1-7; Sunday is 1)
W	Week of month (1-5) (The first week starts on the first day of the month.)
WW	Week number of year (1-53) (The first week starts on the first day of the year.)
IW	ISO week number of year (The first Thursday of the new year is in week 1.)
CC	Century (2 digits)
J	Julian Day (days since January 1, 4712 BC)
Q	Quarter
RM	Month in Roman numerals (I-XII; I=January) (uppercase)
rm	Month in Roman numerals (i-xii; i=January) (lowercase)
TZ	Time-zone name (uppercase)
tz	Time-zone name (lowercase)

Template Pattern Modifiers for Date/Time Formatting

Certain modifiers can be applied to any template pattern to alter its behavior. For example, FMMonth is the Month pattern with the FM modifier.

Modifier	Description
AM	Time is before 12:00
AT	Ignored
JULIAN, JD, J	Next field is Julian Day
FM prefix	Fill mode (suppress padding blanks and zeroes). For example: FMMonth
FX prefix	Fixed format global option (see usage notes). For example: FX Month DD Day
ON	Ignored
PM	Time is on or after 12:00
T	Next field is time
TH suffix	Uppercase ordinal number suffix. For example: DDTH
th suffix	Lowercase ordinal number suffix. For example: DDth
TM prefix	Translation mode (print localized day and month names based on lc_messages). For example: TMMonth

Notes

FM suppresses leading zeroes and trailing blanks that would otherwise be added to make the output of a pattern be fixed width.

Template Patterns for Numeric Formatting

Pattern	Description
9	Value with the specified number of digits
0	Value with leading zeros
. (period)	Decimal point
, (comma)	Group (thousand) separator

PR	Negative value in angle brackets
S	Sign anchored to number (uses locale)
L	Currency symbol (uses locale)
D	Decimal point (uses locale)
G	Group separator (uses locale)
MI	Minus sign in specified position (if number < 0)
PL	Plus sign in specified position (if number > 0)
SG	Plus/minus sign in specified position
RN	Roman numeral (input between 1 and 3999)
TH or th	Ordinal number suffix
V	Shift specified number of digits (see notes)
EEEE	Scientific notation (not implemented yet)

Usage

•	A sign formatted using SG, PL, or MI is not anchored to the number; for example:
	§	TO_CHAR(-12, 'S9999') produces '  -12'
	§	TO_CHAR(-12, 'MI9999') produces '-  12'
•	9 results in a value with the same number of digits as there are 9s. If a digit is not available it outputs a space.
•	TH does not convert values less than zero and does not convert fractional numbers.
•	V effectively multiplies the input values by 10^n, where n is the number of digits following V.
•	TO_CHAR does not support the use of V combined with a decimal point. For example: 99.9V99 is not allowed.
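The note that V multiplies the input by 10^n can be sketched numerically. This is an illustration, not Vertica's code; the function name shift_v is invented, and Decimal is used because binary floats would mis-round a case like 12.45 * 10:

```python
from decimal import Decimal, ROUND_HALF_UP

def shift_v(value, digits_after_v):
    # '99V999' applied to 12.4 means: shift by 10**3, keep the integer digits.
    shifted = Decimal(str(value)) * 10 ** digits_after_v
    return int(shifted.quantize(Decimal('1'), rounding=ROUND_HALF_UP))

print(shift_v(12, 3))     # 12000, like TO_CHAR(12, '99V999')
print(shift_v(12.4, 3))   # 12400, like TO_CHAR(12.4, '99V999')
print(shift_v(12.45, 1))  # 125,   like TO_CHAR(12.45, '99V9')
```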

Mathematical Functions

Some of these functions are provided in multiple forms with different argument types. Except where noted, any given form of a function returns the same data type as its argument. The functions working with DOUBLE PRECISION (page 105) data could vary in accuracy and behavior in boundary cases depending on the host system.

ABS

Returns the absolute value of the argument. The return value has the same data type as the argument.

Syntax

ABS ( expression )

Parameters

expression	Is a value of type INTEGER or DOUBLE PRECISION

Examples

SELECT ABS(-28.7);
 abs
------
 28.7
(1 row)

ACOS

Returns a DOUBLE PRECISION value representing the trigonometric inverse cosine of the argument.

Syntax

ACOS ( expression )

Parameters

expression	Is a value of type DOUBLE PRECISION

Example

SELECT ACOS(1);
 acos
------
 0
(1 row)

ASIN

Returns a DOUBLE PRECISION value representing the trigonometric inverse sine of the argument.

Syntax

ASIN ( expression )

Parameters

expression	Is a value of type DOUBLE PRECISION

Example

SELECT ASIN(1);
 asin
-----------------
 1.5707963267949
(1 row)

ATAN

Returns a DOUBLE PRECISION value representing the trigonometric inverse tangent of the argument.

Syntax

ATAN ( expression )

Parameters

expression	Is a value of type DOUBLE PRECISION

Example

SELECT ATAN(1);
 atan
-------------------
 0.785398163397448
(1 row)

ATAN2

Returns a DOUBLE PRECISION value representing the trigonometric inverse tangent of the arithmetic dividend of the arguments.

Syntax

ATAN2 ( quotient, divisor )

Parameters

quotient	Is an expression of type DOUBLE PRECISION representing the quotient
divisor	Is an expression of type DOUBLE PRECISION representing the divisor

Example

SELECT ATAN2(2, 1);
 atan2
------------------
 1.10714871779409
(1 row)

CBRT

Returns the cube root of the argument. The return value has the type DOUBLE PRECISION.

Syntax

CBRT ( expression )

Parameters

expression	Is a value of type DOUBLE PRECISION

Examples

SELECT CBRT(27.0);
 cbrt
------
 3
(1 row)

CEILING (CEIL)

Returns the smallest floating point value not less than the argument.

Syntax

CEILING ( expression )
CEIL ( expression )

Parameters

expression	Is a value of type INTEGER or DOUBLE PRECISION

Examples

SELECT CEIL(-42.8);
 ceil
------
 -42
(1 row)

COS

Returns a DOUBLE PRECISION value representing the trigonometric cosine of the argument.

Syntax

COS ( expression )

Parameters

expression	Is a value of type DOUBLE PRECISION

Example

SELECT COS(-1);
 cos
------------------
 0.54030230586814
(1 row)

COT

Returns a DOUBLE PRECISION value representing the trigonometric cotangent of the argument.

Syntax

COT ( expression )

Parameters

expression	Is a value of type DOUBLE PRECISION

Example

SELECT COT(1);
 cot
-------------------
 0.642092615934331
(1 row)

DEGREES

Converts an expression from radians to degrees. The return value has the type DOUBLE PRECISION.

Syntax

DEGREES ( expression )

Parameters

expression	Is a value of type DOUBLE PRECISION

Examples

SELECT DEGREES(0.5);
 degrees

------------------
 28.6478897565412
(1 row)

EXP

Returns the exponential function, e to the power of a number. The return value has the same data type as the argument.

Syntax

EXP ( exponent )

Parameters

exponent	Is an expression of type INTEGER or DOUBLE PRECISION

Example

SELECT EXP(1.0);
 exp
------------------
 2.71828182845905
(1 row)

FLOOR

Returns a floating point value representing the largest INTEGER not greater than the argument. The return value has the same data type as the argument.

Syntax

FLOOR ( expression )

Parameters

expression	Is an expression of type INTEGER or DOUBLE PRECISION.

Examples

Although the following example looks like an INTEGER, the number on the left is 2^49 as an INTEGER, but the number on the right is a FLOAT:

SELECT 1<<49, FLOOR(1 << 49);
 ?column?        | floor
-----------------+-----------------
 562949953421312 | 562949953421312
(1 row)

Compare the above example to the following, where the FLOAT result for 2^50 is displayed in scientific notation:

SELECT 1<<50, FLOOR(1 << 50);
 ?column?         | floor
------------------+----------------------
 1125899906842624 | 1.12589990684262e+15
(1 row)
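The FLOOR example above turns on the limits of DOUBLE PRECISION: an IEEE 754 double holds integers exactly only up to 2^53. Any IEEE-double environment reproduces the effect; Python is shown here as an analogy, not as Vertica behavior:

```python
# 2**50 is still exactly representable as a double ...
assert float(2**50) == 2**50
print(float(2**50))  # 1125899906842624.0 (exact)

# ... but past 2**53 adjacent integers collapse to the same double,
# so converting a large BIGINT to float can silently lose precision.
assert float(2**53) == 2**53
assert float(2**53 + 1) == float(2**53)
```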

HASH

Calculates a hash value over its arguments, producing a value in the range 0 <= x < 2^63 (two to the sixty-third power).

Syntax

HASH ( expression [ , ... ] )

Parameters

expression	Is an expression of any data type. For the purpose of hash segmentation, each expression is a column reference (see "Column References" on page 74), usually a column name.

Notes

•	The HASH() function is used to provide projection segmentation over a set of nodes in a cluster and takes up to 32 arguments, usually column names. HASH (Col1, Col2) selects a specific node for each row based on the values of the columns for that row.
•	If your data is fairly regular and you want more even distribution than you get with HASH, consider using MODULARHASH() (page 183) for projection segmentation.

Examples

SELECT HASH(product_price, product_cost)
FROM product_dimension
WHERE product_price = '11';
 hash
---------------------
 4157497907121511878
 1799398249227328285
 3250220637492749639
(3 rows)

See Also

MODULARHASH (page 183)

LN

Returns the natural logarithm of the argument. The return data type is the same as the argument.
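Hash segmentation as described in the HASH notes above can be sketched generically: hash the segmentation columns of a row to a value below 2^63, then map that value onto the participating nodes. The helper hash_cols below is an arbitrary stand-in, not Vertica's HASH() implementation:

```python
import hashlib

def hash_cols(*cols):
    # Deterministic 63-bit hash over the column values of one row.
    digest = hashlib.sha256(repr(cols).encode()).digest()
    return int.from_bytes(digest[:8], 'big') % (2 ** 63)

def node_for_row(cols, node_count):
    # The row is stored on whichever node its hash value maps to.
    return hash_cols(*cols) % node_count

n = node_for_row(('11', 407), 4)
print(n)  # a node index in 0..3, stable for identical rows
```

Because the mapping depends only on the column values, every node computes the same placement for the same row.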

Syntax

LN ( expression )

Parameters

expression	Is an expression of type INTEGER or DOUBLE PRECISION

Examples

SELECT LN(2);
 ln
-------------------
 0.693147180559945
(1 row)

LOG

Returns the logarithm to the specified base of the argument. The return data type is the same as the argument.

Syntax

LOG ( [ base, ] expression )

Parameters

base	Specifies the base (default is base 10)
expression	Is an expression of type INTEGER or DOUBLE PRECISION

Examples

SELECT LOG(2.0, 64);
 log
-----
 6
(1 row)

SELECT LOG(100);
 log
-----
 2
(1 row)

MOD

MOD (modulo) returns the remainder of a division operation. The return data type is the same as the arguments.

Syntax

MOD ( expression1, expression2 )

Parameters

expression1	Specifies the dividend (INTEGER or DOUBLE PRECISION)
expression2	Specifies the divisor (type same as dividend)

Notes

The dividend is the quantity to be divided; the divisor is the quantity that divides it. For example, in 6/2 = 3, the dividend is 6 and the divisor is 2.

Examples

SELECT MOD(9, 4);
 mod
-----
 1
(1 row)

MODULARHASH

Calculates a hash value over its arguments for the purpose of projection segmentation.

Syntax

MODULARHASH ( expression [ , ... ] )

Parameters

expression	Is a column reference (see "Column References" on page 74) of any data type.

Notes

•	The MODULARHASH() function takes up to 32 arguments, usually column names, and selects a specific node for each row based on the values of the columns for that row.
•	If you can hash segment your data using a column with a regular pattern, such as a sequential unique identifier, MODULARHASH distributes the data more evenly than HASH, which distributes data using a normal statistical distribution. In all other uses, MODULARHASH returns 0.

Examples

CREATE PROJECTION fact_ts_2 (f_price, f_cid, f_tid, f_cost, f_date)
AS (SELECT price, cid, tid, cost, dwdate FROM fact)
SEGMENTED BY MODULARHASH(dwdate) ALL NODES OFFSET 2;

See Also

HASH (page 181)
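The claim that MODULARHASH spreads a sequential key more evenly can be seen with plain modular arithmetic. This sketch is conceptual (the function name is invented), not Vertica's algorithm:

```python
def modular_node(key, node_count, offset=0):
    # Sequential keys rotate round-robin through the nodes, so each node
    # receives the same share of rows; OFFSET shifts the starting node.
    return (key + offset) % node_count

print([modular_node(k, 4) for k in range(8)])  # [0, 1, 2, 3, 0, 1, 2, 3]
```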

PI

Returns the constant pi (Π), the ratio of any circle's circumference to its diameter in Euclidean geometry. The return type is DOUBLE PRECISION.

Syntax

PI()

Examples

SELECT PI();
 pi
------------------
 3.14159265358979
(1 row)

POWER

Returns a DOUBLE PRECISION value representing one number raised to the power of another number.

Syntax

POWER ( expression1, expression2 )

Parameters

expression1	Is an expression of type DOUBLE PRECISION that represents the base
expression2	Is an expression of type DOUBLE PRECISION that represents the exponent

Examples

SELECT POWER(9.0, 3.0);
 power
-------
 729
(1 row)

RADIANS

Returns a DOUBLE PRECISION value representing an angle expressed in degrees converted to radians.

Syntax

RADIANS ( expression )

Parameters

expression	Is an expression of type DOUBLE PRECISION representing degrees

Examples

SELECT RADIANS(45);
 radians
-------------------
 0.785398163397448
(1 row)

RANDOM

Returns a uniformly-distributed random number x, where 0 <= x < 1. Its result is a FLOAT8 data type (also called DOUBLE PRECISION (page 105)).

Syntax

RANDOM()

Parameters

RANDOM has no arguments.

Notes

Typical pseudo-random generators accept a seed, which is set to generate a reproducible pseudo-random sequence. Vertica, however, distributes SQL processing over a cluster of nodes, where each node generates its own independent random sequence. Results depending on RANDOM are not reproducible because the work might be divided differently across nodes. Therefore, Vertica automatically generates truly random seeds for each node each time a request is executed and does not provide a mechanism for forcing a specific seed.

Examples

In the following example, the result is a float, which is >= 0 and < 1.0:

SELECT RANDOM();
 random
-------------------
 0.211625560652465
(1 row)

RANDOMINT

Returns a uniformly-distributed integer I, where 0 <= I < N and N <= MAX_INT8. That is, RANDOMINT(N) returns one of the N integers from 0 through N-1.

Syntax

RANDOMINT(N)

Example

In the following example, the result is an INT8, which is >= 0 and < N. In this case, the INT8 is randomly chosen from the set {0,1,2,3,4}:

SELECT RANDOMINT(5);
 randomint
-----------
 3
(1 row)

ROUND

Rounds off a value to a specified number of decimal places. Fractions greater than or equal to .5 are rounded up. Fractions less than .5 are rounded down (truncated).

Syntax

ROUND ( expression [ , decimal-places ] )

Parameters

expression	Is an expression of type DOUBLE PRECISION
decimal-places	If positive, specifies the number of decimal places to display to the right of the decimal point; if negative, specifies the number of decimal places to display to the left of the decimal point.

Notes

The ROUND function rounds off as expected except in the case of a decimal constant with more than 15 decimal places. For example:

SELECT ROUND(3.499999999999999); -- 15 decimal places
 round
-------
 3
(1 row)

The internal integer representation used to compute the ROUND function causes the fraction to be evaluated precisely, and it is thus rounded down. However:

SELECT ROUND(3.4999999999999999); -- 16 decimal places
 round
-------
 4
(1 row)

The internal floating point representation used to compute the ROUND function causes the fraction to be evaluated as 3.5, which is rounded up.

Examples

SELECT ROUND(3.14159, 3);
 round
-------
 3.142
(1 row)

SELECT ROUND(1234567, -3);
 round
---------
 1235000
(1 row)

SELECT ROUND(3.4999, -1);
 round
-------
 0
(1 row)

SIGN

Returns a DOUBLE PRECISION value of -1, 0, or 1 representing the arithmetic sign of the argument.

Syntax

SIGN ( expression )

Parameters

expression	Is an expression of type DOUBLE PRECISION

Examples

SELECT SIGN(-8.4);
 sign
------
 -1
(1 row)

SIN

Returns a DOUBLE PRECISION value representing the trigonometric sine of the argument.

Syntax

SIN ( expression )

Parameters

expression	Is an expression of type DOUBLE PRECISION

Example

SELECT SIN(30 * 2 * 3.14159 / 360);
 sin
-------------------
 0.499999616987256
(1 row)
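The ROUND boundary case described above is a property of IEEE doubles generally, not of Vertica alone. Python, which also stores such literals as doubles, shows the same collapse of the 16-digit literal to exactly 3.5:

```python
# With 16 decimal digits, the nearest representable double *is* 3.5 ...
assert 3.4999999999999999 == 3.5

# ... while 15 digits still produce a value strictly below 3.5,
# which therefore rounds down.
assert 3.499999999999999 < 3.5
print(round(3.499999999999999))  # 3
```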

SQRT

Returns a DOUBLE PRECISION value representing the arithmetic square root of the argument.

Syntax

SQRT ( expression )

Parameters

expression	Is an expression of type DOUBLE PRECISION

Examples

SELECT SQRT(2);
 sqrt
-----------------
 1.4142135623731
(1 row)

TAN

Returns a DOUBLE PRECISION value representing the trigonometric tangent of the argument.

Syntax

TAN ( expression )

Parameters

expression	Is an expression of type DOUBLE PRECISION

Example

SELECT TAN(30);
 tan
-------------------
 -6.40533119664628
(1 row)

TRUNC

Returns a value representing the argument fully truncated (toward zero) or truncated to a specific number of decimal places.

Syntax

TRUNC ( expression [ , places ] )

Parameters

expression	Is an expression of type INTEGER or DOUBLE PRECISION that represents the number to truncate
places	Is an expression of type INTEGER that specifies the number of decimal places to return

Examples

SELECT TRUNC(42.8);
 trunc
-------
 42
(1 row)

SELECT TRUNC(42.4382, 2);
 trunc
-------
 42.43
(1 row)

WIDTH_BUCKET

Lets you construct equiwidth histograms, in which the histogram range is divided into intervals (buckets) of identical sizes. Returns an integer value. In addition, values below the low bucket return 0, and values above the high bucket return bucket_count + 1. If expr evaluates to null, then the expression returns null.

Syntax

WIDTH_BUCKET( expr, hist_min, hist_max, bucket_count )

Parameters

expr	Is the expression for which the histogram is created. This expression must evaluate to a numeric or datetime value or to a value that can be implicitly converted to a numeric or datetime value, and cannot evaluate to null.
hist_min	Is an expression that resolves to the low boundary of bucket 1. Must also evaluate to a numeric or datetime value and cannot evaluate to null.
hist_max	Is an expression that resolves to the high boundary of bucket bucket_count. Must also evaluate to a numeric or datetime value and cannot evaluate to null.
bucket_count	Is an expression that resolves to a constant indicating the number of buckets. This expression always evaluates to a positive INTEGER.

Notes

•	WIDTH_BUCKET accepts the following data types: (float and/or int), (timestamp and/or date and/or timestamptz), or (interval and/or time).
•	WIDTH_BUCKET divides a data set into buckets of equal width. This is known as an equiwidth histogram.
•	When using WIDTH_BUCKET, pay attention to the minimum and maximum boundary values. Each bucket contains values equal to or greater than the base value of that bucket, so that age ranges of 0-20, 20-40, 40-60, 60-80, and so on, are actually 0-19.99, 20-39.99, and so on.
•	Anything less than the given value of hist_min goes in bucket 0, and anything greater than the given value of hist_max goes in the bucket bucket_count+1. The reason there is a bucket 0 is that buckets are numbered from 1 to bucket_count, with 0 and bucket_count+1 reserved for underflow and overflow.

Examples

The following example returns five possible values and has three buckets: 0 [Up to 100), 1 [100-300), 2 [300-500), 3 [500-700), and 4 [700 and up):

SELECT product_description, product_cost,
       WIDTH_BUCKET(product_cost, 100, 700, 3)
FROM product_dimension;

The following example creates a nine-bucket histogram on the annual_income column for customers in Connecticut who are female doctors. The results return the bucket number to an "Income" column, divided into eleven buckets, including an underflow and an overflow. Note that if customers had an annual income greater than the maximum value, they would be assigned to an overflow bucket, 10:

SELECT customer_name, annual_income,
       WIDTH_BUCKET (annual_income, 100000, 1000000, 9) AS "Income"
FROM public.customer_dimension
WHERE customer_state='CT'
AND title='Dr.'
AND customer_gender='Female'
AND household_id < '1000'
ORDER BY "Income";

In the following result set, bucket 9 is empty, and there is no overflow. The value 12,283 is less than 100,000, so it goes into the underflow bucket:

 customer_name      | annual_income | Income
--------------------+---------------+--------
 Joanna A. Nguyen   |         12283 |      0
 Amy I. Nguyen      |        109806 |      1
 Juanita L. Taylor  |        219002 |      2
 Carla E. Brown     |        240872 |      2
 Kim U. Overstreet  |        284011 |      2
 Tiffany N. Reyes   |        323213 |      3
 Rebecca V. Martin  |        324493 |      3
 Betty . Roy        |        476055 |      4
 Midori B. Young    |        462587 |      4
 Martha T. Brown    |        687810 |      6
 Julie D. Miller    |        616509 |      6
 Julie Y. Nielson   |        861066 |      8
 Sarah B. Weaver    |        896260 |      8
 Jessica C. Nielson |        894910 |      8
(14 rows)
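The bucket arithmetic described above is easy to state directly. This reference sketch is not Vertica's code; it reproduces the numeric case, including the underflow and overflow buckets:

```python
def width_bucket(expr, hist_min, hist_max, bucket_count):
    if expr is None:
        return None                 # null in, null out
    if expr < hist_min:
        return 0                    # underflow bucket
    if expr >= hist_max:
        return bucket_count + 1     # overflow bucket
    # [hist_min, hist_max) split into bucket_count equal-width intervals.
    width = (hist_max - hist_min) / bucket_count
    return int((expr - hist_min) // width) + 1

print(width_bucket(12283, 100000, 1000000, 9))   # 0 (underflow)
print(width_bucket(896260, 100000, 1000000, 9))  # 8
print(width_bucket(250, 100, 700, 3))            # 1
```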

NULL-handling Functions

NULL-handling functions take arguments of any type, and their return type is based on their argument types.

COALESCE

Returns the value of the first non-null expression in the list. If all expressions evaluate to null, then the COALESCE function returns null.

Syntax

SELECT COALESCE ( expr1, expr2 );
SELECT COALESCE ( expr1, expr2, ... exprn );

Parameters

•	COALESCE (expr1, expr2) is equivalent to the following CASE expression:
	CASE WHEN expr1 IS NOT NULL THEN expr1 ELSE expr2 END;
•	COALESCE (expr1, expr2, ... exprn), for n >= 3, is equivalent to the following CASE expression:
	CASE WHEN expr1 IS NOT NULL THEN expr1 ELSE COALESCE (expr2, ... exprn) END;

Notes

COALESCE is an ANSI standard function (SQL92).

Example

SELECT product_description,
       COALESCE(lowest_competitor_price, highest_competitor_price, average_competitor_price) AS price
FROM product_dimension;
 product_description                | price
------------------------------------+-------
 Brand #54109 kidney beans          |   264
 Brand #53364 veal                  |   139
 Brand #50720 ice cream sandwhiches |   127
 Brand #48820 coffee cake           |   174
 Brand #48151 halibut               |   353
 Brand #47165 canned olives         |   250
 Brand #39509 lamb                  |   306
 Brand #36228 tuna                  |   245
 Brand #34156 blueberry muffins     |   183
 Brand #31207 clams                 |   163
(10 rows)

See Also

Case Expressions (page 73)
ISNULL (page 192)
NVL (page 193)
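COALESCE's semantics fit in one line. The None-based Python sketch below mirrors the recursive CASE expansion shown above; it is an analogy, not Vertica's implementation:

```python
def coalesce(*exprs):
    # First non-null argument, else null (None).
    return next((e for e in exprs if e is not None), None)

print(coalesce(None, None, 264))  # 264
print(coalesce(None, None))       # None
```

Note that, as in SQL, a non-null "falsy" value such as 0 is still returned; the test is null-ness, not truthiness.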

ISNULL

Returns the value of the first non-null expression in the list.

Syntax

SELECT ISNULL ( expr1, expr2 );

Parameters

•	If expr1 is null, then ISNULL returns expr2.
•	If expr1 is not null, then ISNULL returns expr1.

Notes

•	COALESCE (page 191) is the more standard, more general function.
•	ISNULL is an alias of NVL (page 193).
•	ISNULL is equivalent to COALESCE except that ISNULL is called with only two arguments.
•	ISNULL(a,b) is different from x IS NULL.
•	The arguments can have any data type consistent with SQL92 clause 9.3, "Set operation result data types."
•	Implementation is equivalent to the CASE expression. For example:
	CASE WHEN expr1 IS NULL THEN expr2 ELSE expr1 END;

Examples

The following statement returns the value 140:

SELECT ISNULL(NULL, 140) FROM employee_dimension;

The following statement returns the value 60:

SELECT ISNULL(60, 90) FROM employee_dimension;

SELECT product_description, product_price,
       ISNULL(product_cost, 0.0) AS cost
FROM product_dimension;
 product_description            | product_price | cost
--------------------------------+---------------+------
 Brand #59957 wheat bread       |           405 |  207
 Brand #59052 blueberry muffins |           211 |  140
 Brand #59004 english muffins   |           399 |  240
 Brand #53222 wheat bread       |           323 |   94
 Brand #52951 croissants        |           367 |  121
 Brand #50658 croissants        |           100 |   94
 Brand #49398 white bread       |           318 |   25
 Brand #46099 wheat bread       |           242 |    3
 Brand #45283 wheat bread       |           111 |  105
 Brand #43503 jelly donuts      |           259 |   19
(10 rows)

See Also

Case Expressions (page 73)
COALESCE (page 191)
NVL (page 193)

NULLIF

Compares expr1 and expr2. If the expressions are equal, the function returns null. If the expressions are not equal, the function returns expr1. The result has the same type as expr1.

Syntax

NULLIF( expr1, expr2 )

Parameters

expr1	Is a value of any data type.
expr2	Must have the same data type as expr1 or a type that can be implicitly cast to match expr1.

Examples

The following series of statements illustrates one use of the NULLIF function.

Create a single-column table t:

CREATE TABLE t (x TIMESTAMPTZ);

Create temporary projections:

SELECT IMPLEMENT_TEMP_DESIGN('');

Insert some values into table t:

INSERT INTO t VALUES('2009-09-04 09:14:00-04');
INSERT INTO t VALUES('2010-09-04 09:14:00-04');

Issue a select statement:

SELECT x, NULLIF(x, '2009-09-04 09:14:00 EDT') FROM t;
 x                      | nullif
------------------------+------------------------
 2009-09-04 09:14:00-04 |
 2010-09-04 09:14:00-04 | 2010-09-04 09:14:00-04

NVL

Returns the value of the first non-null expression in the list.

Syntax

SELECT NVL( expr1, expr2 );

Parameters

•	If expr1 is null, then NVL returns expr2.
•	If expr1 is not null, then NVL returns expr1.

Notes

•	COALESCE (page 191) is the more standard, more general function.
•	NVL is equivalent to COALESCE except that NVL is called with only two arguments.
•	The arguments can have any data type consistent with SQL92 clause 9.3, "Set operation result data types."
•	Implementation is equivalent to the CASE expression:
	CASE WHEN expr1 IS NULL THEN expr2 ELSE expr1 END;

Examples

expr1 is not null, so NVL returns expr1:

SELECT NVL('fast', 'database');
 nvl
------
 fast
(1 row)

expr1 is null, so NVL returns expr2:

SELECT NVL(null, 'database');
 nvl
----------
 database
(1 row)

expr2 is null, so NVL returns expr1:

SELECT NVL('fast', null);
 nvl
------
 fast
(1 row)

In the following example, expr1 (title) contains nulls, so NVL returns expr2 and substitutes 'Withheld' for the unknown values:

SELECT customer_name, NVL(title, 'Withheld') AS title
FROM customer_dimension
ORDER BY title;
 customer_name          | title
------------------------+----------
 Alexander I. Lang      | Dr.
 Steve S. Harris        | Dr.
 Daniel R. King         | Dr.
 Luigi I. Robinson      | Dr.
 Duncan U. Carcetti     | Dr.
 Meghan K. Li           | Dr.
 Laura B. Perkins       | Dr.
 Samantha V. Sanchez    | Dr.
 Joseph P. Wilson       | Mr.
 Kevin R. Miller        | Mr.
 Lauren D. Nguyen       | Mrs.
 Emily E. Goldberg      | Mrs.
 Darlene K. Harris      | Ms.
 Meghan J. Farmer       | Ms.

Withheld Withheld Withheld See Also Case Expressions (page 73) COALESCE (page 191) ISNULL (page 192) NVL2 (page 195) NVL2 Takes three arguments. Ms. Examples In this example.” Implementation is equivalent to the CASE expression: CASE WHEN expr1 IS NOT NULL THEN expr2 ELSE expr3 END. 'database'). Notes Arguments two and three can have any data type consistent with SQL92 clause 9. then NVL2 returns expr3. Nguyen Emily E. expr1 is not null. nvl2 -----fast (1 row) In this example. Parameters • • If expr1 is not null. 'fast'.3. Goldberg Darlene K. 'database'). Farmer Bettercare Ameristar Initech (17 rows) | | | | | | | Mrs. “Set operation result data types. Syntax SELECT NVL2 ( expr1 . Ms. expr1 is null. expr2 . Harris Meghan J.SQL Functions Lauren D. If the first argument is not NULL. so NVL2 returns expr2: SELECT NVL2('very'. then NVL2 returns expr2. it returns the second argument. so NVL2 returns expr3: SELECT NVL2(null. The data types of the second and third arguments are implicitly cast to a common type if they don't agree. similar to COALESCE (page 191). Mrs. If expr1 is null. nvl2 ---------- -195- . 'fast'. otherwise it returns the third argument. expr3 ).

NVL2(title. Nguyen | Known Emily E. 'Withheld') as title FROM customer_dimension ORDER BY title. Sanchez | Known Duncan U. customer_name | title ------------------------+------Alexander I. 'Known'. King | Known Luigi I. Carcetti | Known Meghan K. expr1 (title) contains nulls. Harris | Known Meghan J. Goldberg | Known Darlene K. Miller | Known Lauren D. Farmer | Known Bettercare | Withheld Ameristar | Withheld Initech | Withheld (17 rows) See Also Case Expressions (page 73) COALESCE (page 191) NVL (page 191) -196- . Wilson | Known Kevin R. Perkins | Known Samantha V. Li | Known Laura B. so NVL2 returns expr3 ('Withheld') and also substitutes the non-null values with the expression 'Known': SELECT customer_name. Robinson | Known Joseph P. Harris | Known Daniel R.SQL Reference Manual database (1 row) In the following example. Lang | Known Steve S.

String Functions
String functions perform conversion, extraction, or manipulation operations on strings, or return information about strings.
This section describes functions and operators for examining and manipulating string values. Strings in this context include values of the types CHAR, VARCHAR, BINARY, and VARBINARY. Unless otherwise noted, all of the functions listed below work on all of these types, but be wary of potential effects of the automatic padding when using the CHARACTER type. Some functions also exist natively for the bit-string types. BINARY implicitly converts to VARBINARY, so functions that take VARBINARY arguments work with BINARY. Generally, the functions described here also work on data of non-string types by converting that data to a string representation first.
Note: The string functions do not handle multibyte UTF-8 sequences correctly. They treat each byte as a character.

ASCII
Converts the first 8-bit byte of a VARCHAR to an INTEGER.
Syntax
ASCII ( expression )
Parameters
expression    (VARCHAR) is the string to convert.
Notes
ASCII is the opposite of the CHR (page 201) function.
Examples
Expression               Result
SELECT ASCII('A');       65
SELECT ASCII('ab');      97
SELECT ASCII('');
SELECT ASCII(null);

-197-
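Because ASCII and CHR operate on single bytes, their behavior is easy to model outside the database. The following Python sketch is an illustration only (not Vertica code): it mirrors how ASCII reads only the first 8-bit byte of its argument and how CHR masks its argument to a single byte.

```python
def ascii_first_byte(s):
    """Return the integer value of the first 8-bit byte of s; None for NULL/empty input."""
    if s is None or s == "":
        return None
    return s.encode("utf-8")[0]

def chr_single_byte(n):
    """Mask n to a single byte and return the corresponding one-character string."""
    return chr(n & 0xFF)

print(ascii_first_byte("A"))   # 65, as in the example above
print(ascii_first_byte("ab"))  # 97 -- only the first byte is examined
print(chr_single_byte(65))     # A
```

Note how `ascii_first_byte("ab")` ignores everything past the first byte, matching the table above.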

BIT_LENGTH
Returns the length of the string expression in bits (bytes * 8) as an INTEGER.
Syntax
BIT_LENGTH ( expression )
Parameters
expression    (CHAR or VARCHAR or BINARY or VARBINARY) is the string to measure.
Notes
BIT_LENGTH applies to the contents of VARCHAR and VARBINARY fields.
Examples
Expression                               Result
SELECT BIT_LENGTH('abc'::varbinary);     24
SELECT BIT_LENGTH(VARBINARY 'abc');      24
SELECT BIT_LENGTH(VARBINARY(6) 'abc');   24
SELECT BIT_LENGTH(VARCHAR 'abc');        24
SELECT BIT_LENGTH(VARCHAR(6) 'abc');     24
SELECT BIT_LENGTH(CHAR 'abc');           24
SELECT BIT_LENGTH(CHAR(6) 'abc');        48
SELECT BIT_LENGTH(BINARY 'abc');         24
SELECT BIT_LENGTH(BINARY(6) 'abc');      48
SELECT BIT_LENGTH('abc'::binary);        8
SELECT BIT_LENGTH(''::binary);           8
SELECT BIT_LENGTH(''::varbinary);        0
SELECT BIT_LENGTH(null::binary);
SELECT BIT_LENGTH(null::varbinary);
See Also
CHARACTER_LENGTH (page 200), LENGTH (page 211), OCTET_LENGTH (page 214)

-198-

BITCOUNT
Returns the number of one-bits (sometimes referred to as set-bits) in the given VARBINARY value. This is also referred to as the population count.
Syntax
BITCOUNT ( expression )
Parameters
expression    (BINARY or VARBINARY) is the string to count.
Examples
SELECT BITCOUNT(HEX_TO_BINARY('0x10'));
 bitcount
----------
        1
(1 row)
SELECT BITCOUNT(HEX_TO_BINARY('0xF0'));
 bitcount
----------
        4
(1 row)
SELECT BITCOUNT(HEX_TO_BINARY('0xAB'));
 bitcount
----------
        5
(1 row)

BITSTRING_TO_BINARY
Translates the given VARCHAR bitstring representation into a VARBINARY value.
Syntax
BITSTRING_TO_BINARY ( expression )
Parameters
expression    (VARCHAR) is the string to translate.
Notes
VARBINARY BITSTRING_TO_BINARY(VARCHAR) converts data from character type (in bitstring format) to binary type. This function is the inverse of TO_BITSTRING:
BITSTRING_TO_BINARY(TO_BITSTRING(x)) = x
TO_BITSTRING(BITSTRING_TO_BINARY(x)) = x

-199-

Examples
If there are an odd number of characters in the hex value, then the first character is treated as the low nibble of the first (furthest to the left) byte.
SELECT BITSTRING_TO_BINARY('0110000101100010');
 bitstring_to_binary
---------------------
 ab
(1 row)
If an invalid bitstring is supplied, the system returns an error:
SELECT BITSTRING_TO_BINARY('010102010');
ERROR: invalid bitstring "010102010"

BTRIM
Removes the longest string consisting only of specified characters from the start and end of a string.
Syntax
BTRIM ( expression [ , characters-to-remove ] )
Parameters
expression              (CHAR or VARCHAR) is the string to modify
characters-to-remove    (CHAR or VARCHAR) specifies the characters to remove. The default is the space character.
Examples
SELECT BTRIM('xyxtrimyyx', 'xy');
 btrim
-------
 trim
(1 row)
See Also
LTRIM (page 213), RTRIM (page 220), TRIM (page 226)

-200-

CHARACTER_LENGTH
Returns an INTEGER value representing the number of characters in a string.
Syntax
[ CHAR_LENGTH | CHARACTER_LENGTH ] ( expression )
Parameters
expression    (CHAR or VARCHAR) is the string to measure
Notes
CHARACTER_LENGTH is identical to LENGTH (page 211). It strips the padding from CHAR expressions but not from VARCHAR expressions. See BIT_LENGTH (page 198) and OCTET_LENGTH (page 214) for similar functions.
Examples
SELECT CHAR_LENGTH('1234 '::CHAR(10));
 char_length
-------------
           4
(1 row)
SELECT CHAR_LENGTH('1234 '::VARCHAR(10));
 char_length
-------------
           6
(1 row)
SELECT CHAR_LENGTH(NULL::CHAR(10)) IS NULL;
 ?column?
----------
 t
(1 row)

CHR
Converts an INTEGER to a 1-byte VARCHAR.
Syntax
CHR ( expression )
Parameters
expression    (INTEGER) is masked to a single byte.
Notes
CHR is the opposite of the ASCII (page 197) function.
Examples
Expression             Result
SELECT CHR(65);        A
SELECT CHR(65+32);     a
SELECT CHR(null);

CLIENT_ENCODING
Returns a VARCHAR value representing the character set encoding of the client system.
Syntax
CLIENT_ENCODING()

-201-

Notes
• Vertica supports the UTF-8 character set.
• CLIENT_ENCODING returns the same value as the vsql meta-command \encoding and variable ENCODING.
Examples
SELECT CLIENT_ENCODING();
 client_encoding
-----------------
 UTF-8
(1 row)

DECODE
Compares expr to each search value one by one. If expr is equal to a search, the function returns the corresponding result. If no match is found, the function returns default. If default is omitted, the function returns null.
Syntax
SELECT DECODE( expr, search, result [ , search, result ] ... [ , default ] );
Parameters
expression    Is the value to compare.
search        Is the value compared against expression.
result        Is the value returned, if expression is equal to search.
default       Is optional. If no matches are found, DECODE returns default. If default is omitted, then DECODE returns NULL (if no matches are found).
Usage
DECODE is similar to the IF-THEN-ELSE and CASE (page 73) expression:
CASE expr WHEN search THEN result [WHEN search THEN result] [ELSE default] END;
The arguments can have any data type, where all the various result arguments must be of the same type grouping. This leads to a character string type, an exact numeric type, an approximate numeric type, or a DATETIME type. Exact type conversion rules for determining the result type (of a CASE expression) are according to SQL92, clause 9.3, "Set operation result data types." The result types of individual results are promoted to the least common type that can be used to represent all of them.
Examples
The following example converts numeric values in the weight column from the product_dimension table to descriptive values in the output.

-202-

SELECT product_description,
       DECODE(weight, 2, 'Light', 50, 'Medium', 71, 'Heavy', 99, 'Call for help', 'N/A')
FROM product_dimension
WHERE category_description = 'Food'
AND department_description = 'Canned Goods'
AND sku_number BETWEEN 'SKU-#49750' AND 'SKU-#49999'
LIMIT 15;
        product_description        |     case
-----------------------------------+---------------
 Brand #499 canned corn            | N/A
 Brand #49900 fruit cocktail       | Medium
 Brand #49837 canned tomatoes      | Heavy
 Brand #49782 canned peaches       | N/A
 Brand #49805 chicken noodle soup  | N/A
 Brand #49944 canned chicken broth | N/A
 Brand #49819 canned chili         | N/A
 Brand #49848 baked beans          | N/A
 Brand #49989 minestrone soup      | N/A
 Brand #49778 canned peaches       | N/A
 Brand #49770 canned peaches       | N/A
 Brand #4977 fruit cocktail        | N/A
 Brand #49933 canned olives        | N/A
 Brand #49750 canned olives        | Call for help
 Brand #49777 canned tomatoes      | N/A
(15 rows)

GREATEST
Returns the largest value in a list of expressions.
Syntax
GREATEST( expr1, expr2, ... expr_n )
Parameters
expr1, expr2, and expr_n are the expressions to be evaluated.
Notes
• Works for all data types, and implicitly casts similar types. See Examples.
• A NULL value in any one of the expressions returns NULL.
Examples
This example returns 9 as the greatest in the list of expressions:
SELECT GREATEST(7, 5, 9);
 greatest
----------
        9

(1 row)
Note that putting quotes around the integer expressions returns the same result as the first example:
SELECT GREATEST('7', '5', '9');
 greatest
----------
 9
(1 row)
The next example returns FLOAT 1.5 as the greatest because the integer is implicitly cast to float:
SELECT GREATEST(1, 1.5);
 greatest
----------
      1.5
(1 row)
The following example returns 'vertica' as the greatest:
SELECT GREATEST('vertica', 'analytic', 'database');
 greatest
----------
 vertica
(1 row)
Notice this next command returns NULL:
SELECT GREATEST('vertica', 'analytic', null);
 greatest
----------

(1 row)
And one more:
SELECT GREATEST('sit', 'site', 'sight');
 greatest
----------
 site
(1 row)
See Also
LEAST (page 209)

HEX_TO_BINARY
Translates the given VARCHAR hexadecimal representation into a VARBINARY value.
Syntax
HEX_TO_BINARY( [ 0x ] expression )
Parameters
expression    (BINARY or VARBINARY) is the string to translate.
0x            Is an optional prefix.

-204-

Notes
VARBINARY HEX_TO_BINARY(VARCHAR) converts data from character type in hexadecimal format to binary type. This function is the inverse of TO_HEX (page 225):
HEX_TO_BINARY(TO_HEX(x)) = x
TO_HEX(HEX_TO_BINARY(x)) = x
If there are an odd number of characters in the hexadecimal value, the first character is treated as the low nibble of the first (furthest to the left) byte.
Examples
If the given string begins with "0x" the prefix is ignored. For example:
SELECT HEX_TO_BINARY('0x6162') AS hex1, HEX_TO_BINARY('6162') AS hex2;
 hex1 | hex2
------+------
 ab   | ab
(1 row)
If an invalid hex value is given, Vertica returns an "invalid binary representation" error, for example:
SELECT HEX_TO_BINARY('0xffgf');
ERROR: invalid hex string "0xffgf"
See Also
TO_HEX (page 225)

INET_ATON
Given the dotted-quad representation of a network address as a string, returns an integer that represents the value of the address in host byte order.
Syntax
INET_ATON( expression )
Parameters
expression    (VARCHAR) is the string to convert.
Usage
The following syntax converts an IPv4 address represented as the string A to an integer I. INET_ATON trims any spaces from the right of A, calls the Linux function inet_pton http://www.opengroup.org/onlinepubs/000095399/functions/inet_ntop.html, and converts the result from network byte order to host byte order using ntohl http://opengroup.org/onlinepubs/007908775/xns/ntohl.html.
INET_ATON(VARCHAR A) -> INT8 I
If A is NULL, too long, or inet_pton returns an error, the result is NULL.

-205-

Examples
The generated number is always in host byte order. In the following example, the number is calculated as 209×256^3 + 207×256^2 + 224×256 + 40.
SELECT INET_ATON('209.207.224.40');
 inet_aton
------------
 3520061480
(1 row)
SELECT INET_ATON('1.2.3.4');
 inet_aton
-----------
  16909060
(1 row)
SELECT TO_HEX(INET_ATON('1.2.3.4'));
  to_hex
---------
 1020304
(1 row)
See Also
INET_NTOA (page 206)

INET_NTOA
Given a network address as an integer in network byte order, returns the dotted-quad representation of the address as a VARCHAR.
Syntax
INET_NTOA( expression )
Parameters
expression    (INTEGER) is the network address to convert.
Usage
The following syntax converts an IPv4 address represented as integer I to a string A. INET_NTOA converts I from host byte order to network byte order using htonl http://opengroup.org/onlinepubs/007908775/xns/htonl.html, and calls the Linux function inet_ntop http://www.opengroup.org/onlinepubs/000095399/functions/inet_ntop.html.
INET_NTOA(INT8 I) -> VARCHAR A
If I is NULL, greater than 2^32 or negative, the result is NULL.
Examples
SELECT INET_NTOA(16909060);

-206-

 inet_ntoa
-----------
 1.2.3.4
(1 row)
SELECT INET_NTOA(03021962);
  inet_ntoa
-------------
 0.46.28.138
(1 row)
See Also
INET_ATON (page 205)

INITCAP
Capitalizes first letter of each alphanumeric word and puts the rest in lowercase.
Syntax
INITCAP( expression )
Parameters
expression    (VARCHAR) is the string to format.
Examples
Expression                                     Result
SELECT INITCAP('high speed database');         High Speed Database
SELECT INITCAP('LINUX TUTORIAL');              Linux Tutorial
SELECT INITCAP('abc DEF 123aVC 124Btd.lAsT');  Abc Def 123avc 124btd.Last
SELECT INITCAP('');
SELECT INITCAP(null);

INSTR
Searches string for substring and returns an integer indicating the position of the character in string that is the first character of this occurrence.
Syntax
SELECT INSTR( string , substring [ , position [ , occurrence ] ] );
Parameters
string       Is the text expression to search.
substring    Is the string to search for.

-207-

position      Is a nonzero integer indicating the character of string where Vertica begins the search. The first character of string occupies the default position 1, and position cannot be 0. If position is negative, then Vertica counts backward from the end of string and then searches backward from the resulting position.
occurrence    Is an integer indicating which occurrence of substring Vertica should search for. The value of occurrence must be positive (greater than 0).
Usage
Both position and occurrence must be of types that can resolve to an integer, and the default values of both parameters are 1, meaning Vertica begins searching at the first character of string for the first occurrence of substring. The return value is relative to the beginning of string, regardless of the value of position, and is expressed in characters. If the search is unsuccessful (that is, if substring does not appear occurrence times after the position character of string), then the return value is 0.
Examples
The first example searches forward in string 'abc' for substring 'b'. The search returns the position in 'abc' where 'b' occurs, or position 2. Because no position parameters are given, the default search starts at 'a', position 1.
SELECT INSTR('abc', 'b');
 instr
-------
     2
(1 row)
The following three examples use character position to search backward to find the position of a substring.
Note: Although it seems intuitive that the result should be a negative integer, the position of the nth occurrence is read left to right in the string, even though the search happens in reverse (from the end — or right side — of the string).
In the first example, the function counts backward one character from the end of the string, starting with character 'c'. The function then searches backward for the first occurrence of 'a', which it finds in the first position in the search string.
SELECT INSTR('abc', 'a', -1);
 instr
-------
     1
(1 row)
In the second example, the function counts backward one character from the end of the string, starting with character 'b', and searches backward for substring 'bc', which it finds in the second position of the search string, or position 2.
SELECT INSTR('abcb', 'bc', -1);
 instr
-------
     2
(1 row)
In the third example, the function counts backward one character from the end of the string, starting with character 'b', and searches backward for substring 'bcef', which it does not find. The result is 0.
SELECT INSTR('abcb', 'bcef', -1);
 instr
-------
     0
(1 row)

-208-

LEAST
Returns the smallest value in a list of expressions.
Syntax
LEAST( expr1, expr2, ... expr_n )
Parameters
expr1, expr2, and expr_n are the expressions to be evaluated.
Notes
• Works for all data types, and implicitly casts similar types. See Examples.
• A NULL value in any one of the expressions returns NULL.
Examples
This example returns 5 as the least:
SELECT LEAST(7, 5, 9);
 least
-------
     5
(1 row)
Note that putting quotes around the integer expressions returns the same result as the first example:
SELECT LEAST('7', '5', '9');
 least
-------
 5
(1 row)
In the above example, the values are being compared as strings, so '10' would be less than '2'.
The next example returns 1.5, as INTEGER 2 is implicitly cast to FLOAT:
SELECT LEAST(2, 1.5);
 least
-------
   1.5
(1 row)
The following example returns 'analytic' as the least:
SELECT LEAST('vertica', 'analytic', 'database');
  least
----------
 analytic
(1 row)
Notice this next command returns NULL:
SELECT LEAST('vertica', 'analytic', null);
 least
-------

(1 row)
And one more:
SELECT LEAST('sit', 'site', 'sight');
 least
-------
 sight
(1 row)
See Also
GREATEST (page 203)

LEFT
Returns the leftmost length characters of a character string or binary value, depending on the data type of the given input.
Syntax
LEFT ( string ,

-209-

length )
Parameters
string    (CHAR or VARCHAR or BINARY or VARBINARY) is the string to return.
length    Is an INTEGER value that specifies the count of characters or bytes to return.
Examples
SELECT LEFT('vertica', 3);
 left
------
 ver
(1 row)

-210-

SELECT LEFT(HEX_TO_BINARY('0x6162'), 2);
 left
------
 ab
(1 row)
SELECT LEFT(TO_BITSTRING(HEX_TO_BINARY('0x10')), 4);
 left
------
 0001
(1 row)
See Also
SUBSTR (page 222)

LENGTH
Takes one argument as an input and returns an INTEGER value representing the number of characters in a string.
Syntax
LENGTH ( expression )
Parameters
expression    (CHAR or VARCHAR or BINARY or VARBINARY) is the string to measure
Notes
LENGTH strips the padding from CHAR expressions but not from VARCHAR expressions. LENGTH is identical to CHARACTER_LENGTH (page 200). See BIT_LENGTH (page 198) and OCTET_LENGTH (page 214) for similar functions.
Examples
Expression                                Result
SELECT LENGTH('1234 '::CHAR(10));         4
SELECT LENGTH('1234 '::VARCHAR(10));      6
SELECT LENGTH('1234 '::BINARY(10));       10
SELECT LENGTH('1234 '::VARBINARY(10));    6
SELECT LENGTH(NULL::CHAR(10)) IS NULL;    t

LOWER
Returns a VARCHAR value containing the argument converted to lower case letters.

-211-

Syntax
LOWER ( expression )
Parameters
expression    (CHAR or VARCHAR) is the string to convert
Examples
SELECT LOWER('AbCdEfG');
  lower
---------
 abcdefg
(1 row)
SELECT LOWER('The Cat In The Hat');
       lower
--------------------
 the cat in the hat
(1 row)

LPAD
Returns a VARCHAR value representing a string of a specific length filled on the left with specific characters.
Syntax
LPAD ( expression , length [ , fill ] )
Parameters
expression    (CHAR OR VARCHAR) specifies the string to fill
length        (INTEGER) specifies the number of characters to return
fill          (CHAR OR VARCHAR) specifies the repeating string of characters with which to fill the output string. The default is the space character.
Examples
SELECT LPAD('database', 15, 'xzy');
      lpad
-----------------
 xzyxzyxdatabase
(1 row)
If the string is already longer than the specified length it is truncated on the right:
SELECT LPAD('establishment', 10, 'abc');
    lpad
------------
 establishm
(1 row)

-212-

LTRIM
Returns a VARCHAR value representing a string with leading blanks removed from the left side (beginning).
Syntax
LTRIM ( expression [ , characters ] )
Parameters
expression    (CHAR or VARCHAR) is the string to trim
characters    (CHAR or VARCHAR) specifies the characters to remove from the left side of expression. The default is the space character.
Examples
SELECT LTRIM('zzzyyyyyyxxxxxxxxtrim', 'xyz');
 ltrim
-------
 trim
(1 row)
See Also
BTRIM (page 200), RTRIM (page 220), TRIM (page 226)

MD5
Calculates the MD5 hash of string, returning the result as a VARCHAR string in hexadecimal.
Syntax
MD5 ( string )
Parameters
string    Is the argument string.
Examples
SELECT MD5('123');
               md5
----------------------------------
 202cb962ac59075b964b07152d234b70
(1 row)
SELECT MD5('Vertica'::bytea);
               md5
----------------------------------
 fc45b815747d8236f9f6fdb9c2c3f676
(1 row)

-213-
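MD5 here is standard MD5, so the digest can be cross-checked with any common implementation. As a sketch (not Vertica's internal code), Python's hashlib reproduces the first example above:

```python
import hashlib

# MD5 of the string '123', as a 32-character hexadecimal digest.
digest = hashlib.md5(b"123").hexdigest()
print(digest)  # 202cb962ac59075b964b07152d234b70
```

The digest matches the SELECT MD5('123') output shown above.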

OCTET_LENGTH
Returns an INTEGER value representing the maximum number of bytes in a string for CHAR and BINARY, and the current actual number of bytes for VARCHAR and VARBINARY.
Syntax
OCTET_LENGTH ( expression )
Parameters
expression    (CHAR or VARCHAR or BINARY or VARBINARY) is the string to measure
Examples
Expression                                   Result
SELECT OCTET_LENGTH(CHAR(10) '1234 ');       10
SELECT OCTET_LENGTH(VARCHAR(10) '1234 ');    6
SELECT OCTET_LENGTH('abc'::VARBINARY);       3
SELECT OCTET_LENGTH(VARBINARY 'abc');        3
SELECT OCTET_LENGTH(BINARY(6) 'abc');        6
SELECT OCTET_LENGTH(VARBINARY '');           0
SELECT OCTET_LENGTH(''::BINARY);             1
SELECT OCTET_LENGTH(null::BINARY);
SELECT OCTET_LENGTH(null::VARBINARY);
See Also
BIT_LENGTH (page 198), CHARACTER_LENGTH (page 200), LENGTH (page 211)

OVERLAY
Returns a VARCHAR value representing a string having had a substring replaced by another string.
Syntax
OVERLAY ( expression1 PLACING expression2 FROM position [ FOR extent ] )
Parameters
expression1    (CHAR or VARCHAR) is the string to process
expression2    (CHAR or VARCHAR) is the substring to overlay
position       (INTEGER) is the character position (counting from one) at which to

-214-

Syntax POSITION ( substring IN string ) Parameters substring string (CHAR or VARCHAR) is the substring to locate (CHAR or VARCHAR) is the string in which to locate the substring Notes POSITION is identical to STRPOS (page 222) except for the order of the arguments. PLACING 'xxx' FROM 2 FOR 5). POSITION Returns an INTEGER values representing the location of a specified substring with a string (counting from one). position ---------3 (1 row) -215- .SQL Functions begin the overlay extent (INTEGER) specifies the number of characters to replace with the overlay Examples SELECT OVERLAY('123456789' overlay ----------1xxx56789 (1 row) SELECT OVERLAY('123456789' overlay ---------1xxx6789 (1 row) SELECT OVERLAY('123456789' overlay --------1xxx789 (1 row) SELECT OVERLAY('123456789' overlay --------1xxx89 (1 row) PLACING 'xxx' FROM 2). PLACING 'xxx' FROM 2 FOR 4). Examples SELECT POSITION('3' IN '1234'). PLACING 'xxx' FROM 2 FOR 6).

QUOTE_IDENT
Returns the given string, suitably quoted, to be used as an identifier (page 55) in a SQL statement string. Quotes are added only if necessary; that is, if the string contains non-identifier characters, such as "1time", or is a SQL keyword (page 51), such as "Next week" and "Select". Embedded double quotes are doubled.
Syntax
QUOTE_IDENT( string )
Parameters
string    Is the argument string.
Notes
• SQL identifiers, such as table and column names, are stored as created, and references to them are resolved using case-insensitive compares. Thus, you do not need to double-quote mixed-case identifiers.
• Vertica quotes all currently-reserved keywords, even those not currently being used.
Examples
Quoted identifiers are case-insensitive, and Vertica does not supply the quotes:
SELECT QUOTE_IDENT('VErtIcA');
 QUOTE_IDENT
-------------
 VErtIcA
(1 row)
SELECT QUOTE_IDENT('Vertica database');
    QUOTE_IDENT
--------------------
 "Vertica database"
(1 row)
Embedded double quotes are doubled:
SELECT QUOTE_IDENT('Vertica "!" database');
       QUOTE_IDENT
--------------------------
 "Vertica ""!"" database"
(1 row)
The following example uses the SQL keyword SELECT; results are double quoted:
SELECT QUOTE_IDENT('select');
 QUOTE_IDENT
-------------
 "select"
(1 row)

-216-

QUOTE_LITERAL
Returns the given string, suitably quoted, to be used as a string literal in a SQL statement string. Embedded single quotes and backslashes are doubled.
Syntax
QUOTE_LITERAL( string )
Parameters
string    Is the argument string.
Notes
Vertica's use of backslashes in this context is not SQL compliant and is subject to change.
Examples
mydb=> SELECT QUOTE_LITERAL('O''Reilly');
 quote_literal
---------------
 'O''Reilly'
(1 row)

REPEAT
Given a value and a count, this function returns a VARCHAR or VARBINARY value that repeats the given value COUNT times. If the return value is truncated, the given value might not be repeated count times, and the last occurrence of the given value might be truncated.
Syntax
REPEAT ( string , repetitions )
Parameters
string         (CHAR or VARCHAR or BINARY or VARBINARY) is the string to repeat
repetitions    (INTEGER) is the number of times to repeat the string

-217-


Notes
If the repetitions field depends on the contents of a column (that is, it is not a constant), then the maximum length of the REPEAT result is assumed to be 65000 bytes. You can cast the result of REPEAT down to a size that reflects its actual maximum length, so that you can use the result in further expressions. If you run the following example, you get an error message:
SELECT '123456' || REPEAT('a', colx); ERROR: Operator || may give a 65006-byte Varchar result; the limit is 65000 bytes.

If you know that colx can never be greater than 3, the solution is to add a cast (::VARCHAR(3)):
SELECT '123456' || REPEAT('a', colx)::VARCHAR(3);

If colx is greater than 3, the repeat is truncated to exactly three (3) a's. Examples
SELECT REPEAT ('1234', 5); repeat ---------------------12341234123412341234 (1 row) SELECT REPEAT ('vmart', 3); repeat ----------------vmartvmartvmart (1 row)
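The repeat-then-truncate behavior described in the Notes can be sketched in Python, where the slice plays the role of the ::VARCHAR(3) cast (a sketch only; the helper names are illustrative, and in Vertica the cast applies to the function result):

```python
def repeat(value, repetitions):
    """Repeat value the given number of times, like REPEAT."""
    return value * repetitions

def repeat_cast(value, repetitions, max_len):
    """Repeat, then truncate to max_len, as a ::VARCHAR(n) cast on the result does."""
    return (value * repetitions)[:max_len]

print(repeat("1234", 5))       # 12341234123412341234
print(repeat_cast("a", 5, 3))  # aaa -- truncated to exactly three a's
```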

REPLACE
Replaces all occurrences of characters in a string with another set of characters. Syntax
REPLACE ( string , target , replacement )

Parameters
string         (CHAR OR VARCHAR) is the string on which to perform the replacement
target         (CHAR OR VARCHAR) is the string to replace
replacement    (CHAR OR VARCHAR) is the string with which to replace the target

Examples
SELECT REPLACE('Documentation%20Library', '%20', ' ');

-218-

        replace
-----------------------
 Documentation Library
(1 row)
SELECT REPLACE('This & That', '&', 'and');
    replace
---------------
 This and That
(1 row)
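REPLACE substitutes every occurrence of the target, which matches the behavior of Python's str.replace — shown here only as an analogy, not as the Vertica implementation:

```python
# Every occurrence of the target substring is replaced.
print("Documentation%20Library".replace("%20", " "))  # Documentation Library
print("This & That".replace("&", "and"))              # This and That
```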

RIGHT
Returns the rightmost length characters of a character string or binary value, depending on the data type of the given input. Syntax
RIGHT( string , length )

Parameters
string    (CHAR or VARCHAR or BINARY or VARBINARY) is the string to return.
length    Is an INTEGER value that specifies the count of characters or bytes to return.

Examples The following command returns the last three characters of the string 'vertica':
SELECT RIGHT('vertica', 3); right ------ica (1 row) SELECT RIGHT('ab'::binary(4), 2); right ---------\000\000 (1 row) SELECT RIGHT(TO_BITSTRING(HEX_TO_BINARY('0x10')), 4); right ------0000 (1 row)

See Also SUBSTR (page 222)

-219-


RPAD
Returns a VARCHAR value representing a string of a specific length filled on the right with specific characters. Syntax
RPAD ( expression , length [ , fill ] )

Parameters
expression    (CHAR OR VARCHAR) specifies the string to fill
length        (INTEGER) specifies the number of characters to return
fill          (CHAR OR VARCHAR) specifies the repeating string of characters with which to fill the output string. The default is the space character.

Examples
SELECT RPAD('database', 15, 'xzy'); rpad ----------------databasexzyxzyx (1 row)

If the string is already longer than the specified length it is truncated on the right:
SELECT RPAD('database', 6, 'xzy'); rpad -------databa (1 row)

RTRIM
Returns a VARCHAR value representing a string with trailing blanks removed from the right side (end). Syntax
RTRIM ( expression [ , characters ] )

Parameters
expression    (CHAR or VARCHAR) is the string to trim
characters    (CHAR or VARCHAR) specifies the characters to remove from the right side of expression. The default is the space character.

Examples
SELECT RTRIM('trimzzzyyyyyyxxxxxxxx', 'xyz');
 rtrim
-------
 trim

-220-

(1 row)

See Also BTRIM (page 200), LTRIM (page 213), TRIM (page 226)
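BTRIM, LTRIM, and RTRIM all treat their second argument as a set of characters, the same way Python's strip family does. The sketch below is an analogy, not the Vertica implementation:

```python
# Each function removes any run of the listed characters from the relevant end(s).
print("xyxtrimyyx".strip("xy"))               # trim  (BTRIM: both ends)
print("zzzyyyyyyxxxxxxxxtrim".lstrip("xyz"))  # trim  (LTRIM: left end)
print("trimzzzyyyyyyxxxxxxxx".rstrip("xyz"))  # trim  (RTRIM: right end)
```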

SPLIT_PART
Splits string on the delimiter and returns the given field (counting from one). Syntax
SPLIT_PART( string , delimiter , field )

Parameters
string       Is the argument string.
delimiter    Is the given delimiter.
field        (INTEGER) is the number of the part to return.

Examples The specified integer of 2 returns the second string, or def.
SELECT SPLIT_PART('abc~@~def~@~ghi', '~@~', 2); split_part -----------def (1 row)

Here, we specify 3, which returns the third string, or 789.
SELECT SPLIT_PART('123~|~456~|~789', '~|~', 3); split_part -----------789 (1 row)

Note that the tildes are for readability only. Omitting them returns the same results:
SELECT SPLIT_PART('123|456|789', '|', 3);
split_part -----------789 (1 row)

See what happens if you specify an integer that exceeds the number of strings: No results.
SELECT SPLIT_PART('123|456|789', '|', 4); split_part -----------(1 row)

The above result is not null, it is the empty string.

-221-

SELECT SPLIT_PART('123|456|789', '|', 4) IS NULL;
 ?column?
----------
 f
(1 row)

If SPLIT_PART had returned NULL, LENGTH would have returned null.
SELECT LENGTH (SPLIT_PART('123|456|789', '|', 4)); length -------0 (1 row)
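SPLIT_PART's one-based field numbering, and its empty-string (not NULL) result for an out-of-range field, can be sketched as follows; the helper name is illustrative only:

```python
def split_part(string, delimiter, field):
    """Return the one-based field after splitting on delimiter; '' when out of range."""
    parts = string.split(delimiter)
    return parts[field - 1] if 1 <= field <= len(parts) else ""

print(split_part("abc~@~def~@~ghi", "~@~", 2))  # def
print(split_part("123|456|789", "|", 4))        # '' -- empty string, not NULL
```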

STRPOS
Returns an INTEGER value representing the location of a specified substring within a string (counting from one). Syntax
STRPOS ( string , substring )

Parameters
string       (CHAR or VARCHAR) is the string in which to locate the substring
substring    (CHAR or VARCHAR) is the substring to locate

Notes STRPOS is identical to POSITION (page 215) except for the order of the arguments. Examples
SELECT STRPOS('1234','3'); strpos -------3 (1 row)

SUBSTR
SUBSTR returns a VARCHAR value representing a substring of a specified string. Syntax
SUBSTR ( string , position [ , extent ] )

Parameters
string      (CHAR or VARCHAR or BINARY or VARBINARY) is the string from which to extract a substring.
position    (INTEGER) is the starting position of the substring (counting from one).

-222-

extent      (INTEGER) is the length of the substring to extract. The default is the end of the string.

Notes SUBSTR performs the same function as SUBSTRING (page 223). The only difference is the syntax allowed. Examples
SELECT SUBSTR('123456789', 3, 2); substr -------34 (1 row) SELECT SUBSTR('123456789', 3); substr --------3456789 (1 row) SELECT SUBSTR(TO_BITSTRING(HEX_TO_BINARY('0x10')), 2, 2); substr -------00 (1 row) SELECT SUBSTR(TO_HEX('10010'), 2, 2); substr -------71 (1 row)
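SUBSTR's one-based position and optional extent translate directly into Python slicing. The sketch below is illustrative only:

```python
def substr(string, position, extent=None):
    """One-based substring; extent defaults to the rest of the string."""
    start = position - 1
    return string[start:] if extent is None else string[start:start + extent]

print(substr("123456789", 3, 2))  # 34
print(substr("123456789", 3))     # 3456789
```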

SUBSTRING
Given a value, a position, and an optional length, returns a value representing a substring of the specified string at the given position. Syntax
SUBSTRING ( string , position [ , length ] ) SUBSTRING ( string FROM position [ FOR length ] )

Parameters
string      (CHAR or VARCHAR or BINARY or VARBINARY) is the string from which to extract a substring
position    (INTEGER) is the starting position of the substring (counting from one). If position is greater than the length of the given value, an empty value is returned.

-223-

length      (INTEGER) is the length of the substring to extract. The default is the end of the string. If a length is given, the result is at most that many bytes. The maximum length is the length of the given value less the given position. If no length is given, or if the given length is greater than the maximum length, then the length is set to the maximum length.

Notes SUBSTRING performs the same function as SUBSTR (page 222). The only difference is the syntax allowed. Neither length nor position can be negative, and the position cannot be zero because it is one-based. If these conditions are violated, the system returns an error:
SELECT SUBSTRING('ab'::binary(2), -1, 2); ERROR: negative or zero substring start position not allowed

Examples
SELECT SUBSTRING('123456789', 3, 2); substring ----------34 (1 row) SELECT SUBSTRING('123456789' FROM 3 FOR 2); substring ----------34 (1 row)
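The position and length rules described above can be illustrated as follows; a sketch (the display of an empty result may vary by client):

```sql
-- A position past the end of the string yields an empty value:
SELECT SUBSTRING('123456789', 20, 2);
-- With no length, extraction runs to the end of the string:
SELECT SUBSTRING('123456789' FROM 7);   -- '789'
```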

TO_BITSTRING
Returns a VARCHAR that represents the given VARBINARY value in bitstring format. Syntax
TO_BITSTRING( expression )

Parameters
expression (VARBINARY) is the binary value to convert to bitstring format.

Notes VARCHAR TO_BITSTRING(VARBINARY) converts data from binary type to character type (where the character representation is the bitstring format). This function is the inverse of BITSTRING_TO_BINARY:
TO_BITSTRING(BITSTRING_TO_BINARY(x)) = x
BITSTRING_TO_BINARY(TO_BITSTRING(x)) = x

-224-


Examples
SELECT TO_BITSTRING('ab'::BINARY(2)); to_bitstring -----------------0110000101100010 (1 row) SELECT TO_BITSTRING(HEX_TO_BINARY('0x10')); to_bitstring -------------00010000 (1 row) SELECT TO_BITSTRING(HEX_TO_BINARY('0xF0')); to_bitstring -------------11110000 (1 row)
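The inverse relationship with BITSTRING_TO_BINARY stated in the Notes can be verified with a round trip; a sketch:

```sql
-- Converting to a bitstring and back recovers the original bytes:
SELECT BITSTRING_TO_BINARY(TO_BITSTRING('ab'::BINARY(2)));     -- 'ab'
-- The reverse composition recovers the bitstring:
SELECT TO_BITSTRING(BITSTRING_TO_BINARY('0110000101100010'));  -- '0110000101100010'
```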

See Also BITCOUNT (page 198) and BITSTRING_TO_BINARY (page 199)

TO_HEX
Returns a VARCHAR representing the hexadecimal equivalent of a number or binary value. Syntax
TO_HEX ( number )

Parameters
number (INTEGER) is the number to convert to hexadecimal

Notes VARCHAR TO_HEX(INTEGER) and VARCHAR TO_HEX(VARBINARY) are similar. The function converts data from binary type to character type (where the character representation is in hexadecimal format). This function is the inverse of HEX_TO_BINARY.
TO_HEX(HEX_TO_BINARY(x)) = x
HEX_TO_BINARY(TO_HEX(x)) = x

Examples
SELECT TO_HEX(123456789); to_hex --------75bcd15 (1 row)
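The inverse relationship with HEX_TO_BINARY stated in the Notes can likewise be checked with a round trip; a sketch:

```sql
-- TO_HEX undoes HEX_TO_BINARY for binary values:
SELECT TO_HEX(HEX_TO_BINARY('0x6162'));   -- '6162'
```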

For VARBINARY inputs, the returned value is not preceded by "0x". For example:
SELECT TO_HEX('ab'::binary(2));
 to_hex
--------
 6162
(1 row)

-225-

TRANSLATE
Replaces individual characters in string_to_replace with other characters. Syntax
TRANSLATE ( string_to_replace , from_string , to_string )

Parameters
string_to_replace is the string to be translated.
from_string contains characters that should be replaced in string_to_replace.
to_string any character in string_to_replace that matches a character in from_string is replaced by the corresponding character in to_string.

Example
SELECT TRANSLATE('12345', '14', 'zq'); translate ----------z23q5 (1 row) SELECT TRANSLATE('simple', 'i', 'a'); translate ----------sample (1 row)

TRIM
Combines the BTRIM, LTRIM, and RTRIM functions into a single function. Syntax
TRIM ( [ [ LEADING | TRAILING | BOTH ] characters FROM ] expression )

Parameters
LEADING removes the specified characters from the left side of the string.
TRAILING removes the specified characters from the right side of the string.
BOTH removes the specified characters from both sides of the string (default).
characters (CHAR or VARCHAR) specifies the characters to remove from expression. The default is the space character.
expression (CHAR or VARCHAR) is the string to trim.

-226-

Examples
SELECT '-' || TRIM(LEADING 'x' FROM 'xxdatabasexx') || '-'; ?column? --------------databasexx(1 row) SELECT '-' || TRIM(TRAILING 'x' FROM 'xxdatabasexx') || '-'; ?column? --------------xxdatabase(1 row) SELECT '-' || TRIM(BOTH 'x' FROM 'xxdatabasexx') || '-'; ?column? ------------database(1 row) SELECT '-' || TRIM('x' FROM 'xxdatabasexx') || '-'; ?column? ------------database(1 row) SELECT '-' || TRIM(LEADING FROM ' database ') || '-'; ?column? --------------database (1 row) SELECT '-' || TRIM(' database ') || '-'; ?column? ------------database(1 row)

See Also BTRIM (page 200), LTRIM (page 213), RTRIM (page 220)

UPPER
Returns a VARCHAR value containing the argument converted to upper case letters. Syntax
UPPER ( expression )

Parameters
expression (CHAR or VARCHAR) is the string to convert

-227-

Examples
SELECT UPPER('AbCdEfG');
 upper
----------
 ABCDEFG
(1 row)

V6_ATON
Converts an IPv6 address represented as a character string to a binary string.

Syntax
V6_ATON ( expression )

Parameters
expression (VARCHAR) is the string to convert.

Notes
The following syntax converts an IPv6 address represented as the character string A to a binary string B:
V6_ATON(VARCHAR A) -> VARBINARY(16) B
If A has no colons, it is prepended with '::ffff:'. V6_ATON trims any spaces from the right of A and calls the Linux function inet_pton http://www.opengroup.org/onlinepubs/000095399/functions/inet_pton.html. If A is NULL, too long, or if inet_pton returns an error, the result is NULL.

Examples
SELECT V6_ATON('2001:DB8::8:800:200C:417A');
 v6_aton
------------------------------------------------------
 \001\015\270\000\000\000\000\000\010\010\000 \014Az
(1 row)

SELECT TO_HEX(V6_ATON('2001:DB8::8:800:200C:417A'));
 to_hex
----------------------------------
 20010db80000000000080800200c417a
(1 row)

SELECT V6_ATON('1.2.3.4');
 v6_aton
------------------------------------------------------------------
 \000\000\000\000\000\000\000\000\000\000\377\377\001\002\003\004
(1 row)

SELECT V6_ATON('::1.2.3.4');
 v6_aton
------------------------------------------------------------------
 \000\000\000\000\000\000\000\000\000\000\000\000\001\002\003\004

-228-

(1 row)

See Also
V6_NTOA (page 229)

V6_NTOA
Converts an IPv6 address represented as VARBINARY to a character string.

Syntax
V6_NTOA ( expression )

Parameters
expression (VARBINARY) is the binary string to convert.

Notes
The following syntax converts an IPv6 address represented as VARBINARY B to a string A:
V6_NTOA(VARBINARY B) -> VARCHAR A
If B is NULL or longer than 16 bytes, the result is NULL. V6_NTOA right-pads B to 16 bytes with zeros, if necessary, and calls the Linux function inet_ntop http://www.opengroup.org/onlinepubs/000095399/functions/inet_ntop.html.
Note: Vertica automatically converts the form '::ffff:1.2.3.4' to '1.2.3.4'.

Examples
SELECT V6_NTOA(' \001\015\270\000\000\000\000\000\010\010\000 \014Az');
 v6_ntoa
---------------------------
 2001:db8::8:800:200c:417a
(1 row)

SELECT V6_NTOA(V6_ATON('1.2.3.4'));
 v6_ntoa
---------
 1.2.3.4
(1 row)

SELECT V6_NTOA(V6_ATON('::1.2.3.4'));
 v6_ntoa
-----------

-229-

 ::1.2.3.4
(1 row)

See Also
V6_ATON (page 228)

V6_SUBNETA
Calculates a subnet address in CIDR (Classless Inter-Domain Routing) format from a binary or alphanumeric IPv6 address.

Syntax
V6_SUBNETA( expression1 , expression2 )

Parameters
expression1 (VARBINARY or VARCHAR) is the IPv6 address from which to calculate the subnet.
expression2 (INTEGER) is the size of the subnet.

Notes
The following syntax calculates a subnet address in CIDR format from a binary or varchar IPv6 address:
V6_SUBNETA(BINARY B, INT8 N) -> VARCHAR C
V6_SUBNETA masks a binary IPv6 address B so that the N leftmost bits form a subnet address, while the remaining rightmost bits are cleared. It then converts to an alphanumeric IPv6 address, appending a slash and N.
The following syntax calculates a subnet address in CIDR format from an alphanumeric IPv6 address:
V6_SUBNETA(VARCHAR A, INT8 N) -> V6_SUBNETA(V6_ATON(A), N) -> VARCHAR C

Examples
SELECT V6_SUBNETA(V6_ATON('2001:db8::8:800:200c:417a'), 28);
 v6_subneta
---------------
 2001:db0::/28
(1 row)

See Also
V6_SUBNETN (page 230)

V6_SUBNETN
Calculates a subnet address in CIDR (Classless Inter-Domain Routing) format from a varbinary or alphanumeric IPv6 address.

-230-

Syntax
V6_SUBNETN( expression1 , expression2 )

Parameters
expression1 (VARBINARY or VARCHAR) is the IPv6 address from which to calculate the subnet.
expression2 (INTEGER) is the size of the subnet.

Notes
The following syntax masks a BINARY IPv6 address B so that the N left-most bits of S form a subnet address, while the remaining right-most bits are cleared:
V6_SUBNETN(VARBINARY B, INT8 N) -> VARBINARY(16) S
If B is NULL or longer than 16 bytes, or if N is not between 0 and 128 inclusive, the result is NULL. V6_SUBNETN right-pads B to 16 bytes with zeros, if necessary, and masks B, preserving its N-bit subnet prefix.
Note: S = [B]/N in Classless Inter-Domain Routing http://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing notation (CIDR notation).
The following syntax masks an alphanumeric IPv6 address A so that the N leftmost bits form a subnet address, while the remaining rightmost bits are cleared:
V6_SUBNETN(VARCHAR A, INT8 N) -> V6_SUBNETN(V6_ATON(A), N) -> VARBINARY(16) S

Example
SELECT V6_SUBNETN(V6_ATON('2001:db8::8:800:200c:417a'), 28);
 v6_subnetn
---------------------------------------------------------------
 \001\015\260\000\000\000\000\000\000\000\000\000\000\000\000
(1 row)

See Also
V6_SUBNETA (page 230)

V6_TYPE
Characterizes a binary or alphanumeric IPv6 address B as an integer type.

Syntax
V6_TYPE( expression )

Parameters
expression (VARBINARY or VARCHAR) is the IPv6 address to characterize.

-231-

Notes
V6_TYPE(VARBINARY B) returns INT8 T.
V6_TYPE(VARCHAR A) -> V6_TYPE(V6_ATON(A)) -> INT8 T
The IPv6 types are defined in the Network Working Group's IP Version 6 Addressing Architecture memo http://www.ietf.org/rfc/rfc4291.txt:
GLOBAL = 0      Global unicast addresses
LINKLOCAL = 1   Link-Local unicast (and Private-Use) addresses
LOOPBACK = 2    Loopback
UNSPECIFIED = 3 Unspecified
MULTICAST = 4   Multicast
IPv4-mapped and IPv4-compatible IPv6 addresses are also interpreted, as specified in IPv4 Global Unicast Address Assignments http://www.iana.org/assignments/ipv4-address-space. For IPv4, Private-Use is grouped with Link-Local. If B is VARBINARY, it is right-padded to 16 bytes with zeros, if necessary. If B is NULL or longer than 16 bytes, the result is NULL.

Details
IPv4 (either kind):
0.0.0.0/8       UNSPECIFIED
10.0.0.0/8      LINKLOCAL
127.0.0.0/8     LOOPBACK
169.254.0.0/16  LINKLOCAL
172.16.0.0/12   LINKLOCAL
192.168.0.0/16  LINKLOCAL
224.0.0.0/4     MULTICAST
others          GLOBAL

IPv6:
::0/128    UNSPECIFIED
::1/128    LOOPBACK
fe80::/10  LINKLOCAL
ff00::/8   MULTICAST
others     GLOBAL

See Also
INET_ATON (page 205)
IP Version 6 Addressing Architecture http://www.ietf.org/rfc/rfc4291.txt
IPv4 Global Unicast Address Assignments http://www.iana.org/assignments/ipv4-address-space

-232-

Examples
SELECT V6_TYPE(V6_ATON('192.168.2.10'));
 v6_type
---------
 1
(1 row)

SELECT V6_TYPE(V6_ATON('2001:db8::8:800:200c:417a'));
 v6_type
---------
 0
(1 row)

-233-
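Following the type and Details tables above, other address classes map to the remaining type codes; a brief sketch (addresses chosen here for illustration):

```sql
SELECT V6_TYPE(V6_ATON('::1'));      -- 2 (LOOPBACK, per ::1/128)
SELECT V6_TYPE(V6_ATON('ff02::1'));  -- 4 (MULTICAST, per ff00::/8)
```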

System Information Functions
These functions provide system information regarding user sessions. The superuser has unrestricted access to all system information, but users can view only information about their own, current sessions.

CURRENT_DATABASE
Returns a VARCHAR value containing the name of the database to which you are connected.

Syntax
CURRENT_DATABASE()

Notes
The CURRENT_DATABASE function does not require parentheses.

Examples
SELECT CURRENT_DATABASE();
 current_database
------------------
 vmartschema
(1 row)

The following command returns the same results without the parentheses:
SELECT CURRENT_DATABASE;
 current_database
------------------
 vmartschema
(1 row)

CURRENT_SCHEMA
Returns the name of the current schema.

Syntax
CURRENT_SCHEMA()

Notes
The CURRENT_SCHEMA function does not require parentheses.

Examples
SELECT CURRENT_SCHEMA();
 current_schema
----------------
 public
(1 row)

The following command returns the same results without the parentheses:

-234-

SELECT CURRENT_SCHEMA;
 current_schema
----------------
 public
(1 row)

See Also
SET (http://www.postgresql.org/docs/8.0/static/sql-set.html)

CURRENT_USER
Returns a VARCHAR containing the name of the user who initiated the current database connection.

Syntax
CURRENT_USER()

Notes
• The CURRENT_USER function does not require parentheses.
• This function is useful for permission checking and is equivalent to SESSION_USER (page 236) and USER (page 237).

Examples
SELECT CURRENT_USER();
 current_user
--------------
 dbadmin
(1 row)

The following command returns the same results without the parentheses:
SELECT CURRENT_USER;
 current_user
--------------
 dbadmin
(1 row)

HAS_TABLE_PRIVILEGE
Returns a true/false value indicating whether or not a user can access a table in a particular way.

Syntax
HAS_TABLE_PRIVILEGE ( [ user, ] table , privilege )

Parameters
user specifies the name of a database user. The default is the CURRENT_USER (page 235).
table specifies the name of a table in the logical schema.
privilege is one of the following:
SELECT Allows the user to SELECT from any column of the specified table.
INSERT Allows the user to INSERT records into the specified table and to use the COPY (page 323) command to load the table.
UPDATE Allows the user to UPDATE records in the specified table.
DELETE Allows the user to delete a row from the specified table.
REFERENCES Allows the user to create a foreign key constraint (privileges required on both the referencing and referenced tables).

-235-

Notes
All arguments must be quoted string constants (page 57).

Examples
SELECT HAS_TABLE_PRIVILEGE('store.store_dimension', 'SELECT');
 has_table_privilege
---------------------
 t
(1 row)

SELECT HAS_TABLE_PRIVILEGE('release', 'store.store_dimension', 'INSERT');
 has_table_privilege
---------------------
 t
(1 row)

SELECT HAS_TABLE_PRIVILEGE('store.store_dimension', 'UPDATE');
 has_table_privilege
---------------------
 t
(1 row)

SELECT HAS_TABLE_PRIVILEGE('store.store_dimension', 'REFERENCES');
 has_table_privilege
---------------------
 t
(1 row)

SESSION_USER
Returns a VARCHAR containing the name of the user who initiated the current database session.

Syntax
SESSION_USER()

Notes
• The SESSION_USER function does not require parentheses.
• Is equivalent to CURRENT_USER (page 235) and USER (page 237).

Examples
SELECT SESSION_USER();
 session_user
--------------
 dbadmin

-236-

(1 row)

The following command returns the same results without the parentheses:
SELECT SESSION_USER;
 session_user
--------------
 dbadmin
(1 row)

USER
Returns a VARCHAR containing the name of the user who initiated the current database connection.

Syntax
USER()

Notes
• The USER function does not require parentheses.
• Is equivalent to CURRENT_USER (page 235).

Examples
SELECT USER();
 current_user
--------------
 dbadmin
(1 row)

The following command returns the same results without the parentheses:
SELECT USER;
 current_user
--------------
 dbadmin
(1 row)

VERSION
Returns a VARCHAR containing a Vertica node's version information.

Syntax
VERSION()

Examples
SELECT VERSION();
 version
-------------------------------------------------
 Vertica Analytic Database v3.0.0-20090407010008
(1 row)

-237-

Vertica Functions
The functions in this section are specific to the Vertica database.

ADD_DESIGN_TABLES
Identifies tables for which to create projections. Database Designer creates projections for these tables. If a design context contains more than one design configuration, these tables are used by all the design configurations within the design context.

Syntax
ADD_DESIGN_TABLES ( design_context_name , table_specification )

Parameters
design_context_name specifies the name of the design context schema in which to add the tables.
table_specification specifies the names of one or more tables to add to the design. Use a comma-delimited list to specify tables, as follows:
To specify a specific table, use the form: [schema_name.]table_name where schema_name is the schema that contains the table for which to create a projection.
To specify all the tables in a particular schema, use the form: schema_name.*

Notes
• None of these tables are marked for segmentation when they are added to the design context. See SET_DESIGN_SEGMENTATION_TABLE (page 306) for information about segmenting tables.
• Vertica automatically assumes that the number of rows for each table will match the data statistics when loaded. If the number of rows in a table will differ by an order of magnitude at implementation, use the SET_DESIGN_TABLE_ROWS (page 306) function to specify the approximate number of rows for the table.

Examples
The following example adds all the tables in the database to the vmart design context:
SELECT ADD_DESIGN_TABLES('vmart', '');
The following example adds all the tables in the public schema to the vmart design context:
SELECT ADD_DESIGN_TABLES('vmart', 'public.*');
The following example adds the store_orders_fact and store_sales_fact tables in the store schema to the vmart design context:
SELECT ADD_DESIGN_TABLES('vmart', 'store.store_orders_fact, store.store_sales_fact');

-238-

ADD_LOCATION
Adds a location to store data.

Syntax
ADD_LOCATION ( path , [ node ] , [ usage_string ] )

Parameters
path specifies where the storage location is mounted. Path must be an empty directory with write permissions for user, group, or all.
node [Optional] is the Vertica node where the location is available. If this parameter is omitted, node defaults to the initiator.
usage_string [Optional] is one of the following:
DATA: Only data is stored in the location.
TEMP: Only temporary files that are created during loads or queries are stored in the location.
DATA,TEMP: Both types of files are stored in the location.

Notes
• By default, the location is used to store both data and temporary files.
• Locations can be added from any node to any node.
• The DBA can specify the resource type (temp files, ROS containers) for catalog storage locations, which could help improve I/O performance.

Example
This example adds a location that stores data and temporary files:
SELECT ADD_LOCATION('/secondVerticaStorageLocation/' , 'node2');
This example adds a location to store data only:
SELECT ADD_LOCATION('/secondVerticaStorageLocation/' , 'node2' , 'DATA');

See Also
ALTER_LOCATION_USE and RETIRE_LOCATION (page 294)

-239-

ADVANCE_EPOCH
Manually closes the current epoch and begins a new epoch.

Syntax
SELECT ADVANCE_EPOCH()

Notes
Use ADVANCE_EPOCH immediately before using ALTER PROJECTION MOVEOUT.

See Also
ALTER PROJECTION (page 314)

ALTER_LOCATION_USE
Alters the type of files stored in the specified storage location.

Syntax
ALTER_LOCATION_USE ( path , [ node ] , usage_string )

Parameters
path specifies where the storage location is mounted.
node [Optional] is the Vertica node where the location is available. If this parameter is omitted, node defaults to the initiator.
usage_string is one of the following:
DATA: Only data is stored in the location.
TEMP: Only temporary files that are created during loads or queries are stored in the location.
DATA,TEMP: Both types of files are stored in the location.

Notes
• Altering the type of files stored in a particular location is useful if you create additional storage locations and you want to isolate execution engine temporary files from data files. These files can be stored in the same storage location or separate storage locations.
• After modifying the location's use, at least one location must remain for storing data and temp files.
• When a storage location is altered, it stores only the type of information indicated from that point forward. For example:
§ If you modify a storage location that previously stored both temp and data files so that it only stores temp files, the data is eventually merged out through the ATM. You can also merge it out manually.
§ If you modify a storage location that previously stored both temp and data files so that it only stores data files, all currently running statements that use these temp files, such as queries and loads, continue to run. Subsequent statements no longer use this location.

-240-

Example
The following example alters the storage location on node3 to store data only:
SELECT ALTER_LOCATION_USE ('/thirdVerticaStorageLocation/' , 'node3' , 'DATA');

See Also
ADD_LOCATION (page 239), RETIRE_LOCATION (page 294), and Modifying Storage Locations

ANALYZE_CONSTRAINTS
Analyzes and reports on constraint violations within the current schema search path. Vertica checks for constraint violations when queries are executed, not when data is loaded. To avoid constraint violations, load data without committing it and then perform a post-load check of your data using the ANALYZE_CONSTRAINTS function. If the function finds constraint violations, you can roll back the load because you have not committed it.

Syntax
SELECT ANALYZE_CONSTRAINTS [ ( '' ) | ( schema.table ) | ( schema.table , column ) ]

Parameters
('') Analyzes and reports on all tables within the current schema search path.
table Analyzes and reports on all constraints referring to the specified table.
column Analyzes and reports on all constraints referring to the specified table that contains the specified columns.

Notes
• Use COPY with NO COMMIT keywords to incorporate detection of constraint violations into the load process.
• You can check for constraint violations by passing an empty argument (which returns violations on all tables within the current schema), by passing a single table argument, or by passing two arguments comprising a table and a column or list of columns.
• ANALYZE_CONSTRAINTS() fails if the database cannot perform constraint checks, such as when the system is out of resources. Vertica returns an error that identifies the specific condition that caused the failure.

Return Values
ANALYZE_CONSTRAINTS() returns results in a structured set (see table below) that lists the schema name, table name, column name, constraint name, constraint type, and the column values that caused the violation. If the result set is empty, then no constraint violations exist; for example:
SELECT ANALYZE_CONSTRAINTS ('public.product_dimension', 'product_key');
 Schema Name | Table Name | Column Names | Constraint Name | Constraint Type | Column Values

-241-

-------------+------------+--------------+-----------------+-----------------+---------------
(0 rows)

The following result set shows a primary key violation, along with the value that caused the violation:
SELECT ANALYZE_CONSTRAINTS ('');
 Schema Name | Table Name | Column Names | Constraint Name | Constraint Type | Column Values
-------------+------------+--------------+-----------------+-----------------+---------------
 store       | t1         | c1           | pk_t1           | PRIMARY         | ('10')
(1 row)

The result set columns are described in further detail in the following table:

Column Name     | Data Type | Description
Schema Name     | VARCHAR   | The name of the schema.
Table Name      | VARCHAR   | The name of the table, if specified.
Column Names    | VARCHAR   | Names of columns containing constraints. Multiple columns are in a comma-separated list: store_key, date_key.
Constraint Name | VARCHAR   | The given name of the primary key, foreign key, unique, or not null constraint, if specified.
Constraint Type | VARCHAR   | Identified by one of the following strings: 'PRIMARY KEY', 'FOREIGN KEY', 'UNIQUE', or 'NOT NULL'.
Column Values   | VARCHAR   | Value of the constraint column, in the same order in which Column Names contains the value of that column in the violating row. When interpreted as SQL, the value of this column forms a list of values of the same type as the columns in Column Names; for example: ('1'), ('1', 'z').

Locks
ANALYZE_CONSTRAINTS(), on the other hand, takes locks in the same way that SELECT * FROM t1 holds a lock on table t1. The following table describes the locks taken by ANALYZE_CONSTRAINTS:

-242-

Transaction Mode: Locks Acquired
SERIALIZABLE (default): S (read) locks on all tables involved in all constraints analyzed. For example, if a FOREIGN KEY constraint is analyzed, an S lock is acquired on both the table with the FOREIGN KEY and the table with the corresponding PRIMARY KEY.
READ COMMITTED: None.

Note: If ANALYZE_CONSTRAINTS is run at the READ COMMITTED isolation level, the function might not find duplicates across concurrent transactions or duplicates that were committed by another transaction in the current epoch. In READ COMMITTED mode, a query sees changes in the current transaction, so any uncommitted duplicates within the current transaction are detected.

Caution: ANALYZE_CONSTRAINTS in SERIALIZABLE mode results in S (read) locks on all tables involved in all constraints analyzed. Thus, the database's ability to perform other concurrent loads while those locks are held could be severely impaired. If concurrent parallel loads are important, do not use ANALYZE_CONSTRAINTS with SERIALIZABLE isolation level.

Disabling Duplicate Key Errors
When ANALYZE_CONSTRAINTS finds violations, such as when you insert a duplicate value into a primary key, you can correct errors using the following functions. Effects last until the end of the session only:
• SELECT DISABLE_DUPLICATE_KEY_ERROR (page 261)
• SELECT REENABLE_DUPLICATE_KEY_ERROR (page 292)

Examples
Given the following sample inputs, the result set contains one violation because the same primary key value (10) was inserted into table t1 twice:
CREATE TABLE t1(c1 INT);
ALTER TABLE t1 ADD CONSTRAINT pk_t1 PRIMARY KEY (c1);
CREATE PROJECTION t1_p (c1) AS SELECT * FROM t1 UNSEGMENTED ALL NODES;
INSERT INTO t1 values (10);
INSERT INTO t1 values (10); --Duplicate primary key value
SELECT ANALYZE_CONSTRAINTS('');
 Schema Name | Table Name | Column Names | Constraint Name | Constraint Type | Column Values
-------------+------------+--------------+-----------------+-----------------+---------------
 store       | t1         | c1           | pk_t1           | PRIMARY         | ('10')
(1 row)

This example returns three rows, one row each for the primary key and unique key violation, and one row for the foreign key violation (missing primary key):
CREATE TABLE fact0(c1 INTEGER PRIMARY KEY, c2 INTEGER UNIQUE);
CREATE TABLE dim0 (c1 INTEGER REFERENCES fact0(c1));
SELECT IMPLEMENT_TEMP_DESIGN(''); --Creates the required projections
INSERT INTO fact0 values (1, 3);
INSERT INTO fact0 values (1, 3); --Primary key (PK) and unique key violation
INSERT INTO dim0 values (2);     --Foreign key (FK) violation
SELECT ANALYZE_CONSTRAINTS('');

-243-

 Schema Name | Table Name | Column Names | Constraint Name | Constraint Type | Column Values
-------------+------------+--------------+-----------------+-----------------+---------------
 public      | fact0      | c1           |                 | PRIMARY         | ('1')
 public      | fact0      | c2           |                 | UNIQUE          | ('3')
 public      | dim0       | c1           |                 | FOREIGN         | ('2')
(3 rows)

If you specify the wrong table, the system returns an error message:
SELECT ANALYZE_CONSTRAINTS('abc');
ERROR: 'abc' is not a table in the current search path
If you issue the function using incorrect syntax, the system returns an error message with a hint:
ANALYZE ALL CONSTRAINT;
Or
ANALYZE CONSTRAINT abc;
ERROR: ANALYZE CONSTRAINT is not supported.
HINT: You may consider using analyze_constraints().

In this example, create a table that contains 3 integer columns, one a unique key and one a primary key:
CREATE TABLE fact_1(f INTEGER, f_UK INTEGER UNIQUE, f_PK INTEGER PRIMARY KEY);
Issue the command to create superprojections.
Note: The empty string in the following code example creates a temporary physical schema design (projections) for any table in the database that lacks K-safe projections:
SELECT IMPLEMENT_TEMP_DESIGN('');
 implement_temp_design
-----------------------
 4
(1 row)

Try issuing a command that refers to a nonexistent column:
SELECT ANALYZE_CONSTRAINTS('fee', 'f2');
ERROR: 'fee' is not a table name in the current search path
Insert some values into table fact_1 and commit the changes:
INSERT INTO fact_1 values (1, 1, 1);
COMMIT;
Now issue the ANALYZE_CONSTRAINTS command on table fact_1. No constraint violations are expected and none are found:
SELECT ANALYZE_CONSTRAINTS('fact_1');
 Schema Name | Table Name | Column Names | Constraint Name | Constraint Type | Column Values
-------------+------------+--------------+-----------------+-----------------+---------------
(0 rows)

Now insert duplicate unique and primary key values and run ANALYZE_CONSTRAINTS on table fact_1 again. The system shows two violations: one against the primary key and one against the unique key:
INSERT INTO fact_1 VALUES (1, 1, 1);
COMMIT;
SELECT ANALYZE_CONSTRAINTS('fact_1');
 Schema Name | Table Name | Column Names | Constraint Name | Constraint Type | Column Values
-------------+------------+--------------+-----------------+-----------------+---------------
 public      | fact_1     | f_pk         |                 | PRIMARY         | ('1')
 public      | fact_1     | f_uk         |                 | UNIQUE          | ('1')

-244-

(2 rows)

The following command specifies constraint validation on only the unique key in table fact_1:
SELECT ANALYZE_CONSTRAINTS('fact_1', 'f_UK');
 Schema Name | Table Name | Column Names | Constraint Name | Constraint Type | Column Values
-------------+------------+--------------+-----------------+-----------------+---------------
 public      | fact_1     | f_uk         |                 | UNIQUE          | ('1')
(1 row)

The following example shows that you can specify the same column more than once; the function, however, returns the violation once only:
SELECT ANALYZE_CONSTRAINTS('fact_1', 'f_PK, F_PK');
 Schema Name | Table Name | Column Names | Constraint Name | Constraint Type | Column Values
-------------+------------+--------------+-----------------+-----------------+---------------
 public      | fact_1     | f_pk         |                 | PRIMARY         | ('1')
(1 row)

The following example creates a new dimension table, dim_1, and inserts a foreign key and different (character) data types:
CREATE TABLE dim_1 (b VARCHAR(3), b_PK VARCHAR(4), b_FK INTEGER REFERENCES fact_1(f_PK));
select implement_temp_design('');
The following command inserts a missing foreign key (0) in table dim_1 and commits the changes:
INSERT INTO dim_1 VALUES ('r1', 'Xpk1', 0);
COMMIT;
Checking for constraints on table dim_1 detects a foreign key violation:
SELECT ANALYZE_CONSTRAINTS('dim_1');
 Schema Name | Table Name | Column Names | Constraint Name | Constraint Type | Column Values
-------------+------------+--------------+-----------------+-----------------+---------------
 public      | dim_1      | b_fk         |                 | FOREIGN         | ('0')
(1 row)

Alter the table to create a multicolumn unique key and multicolumn foreign key and issue the command that creates the superprojections:
ALTER TABLE dim_1 ADD CONSTRAINT dim_1_multiuk PRIMARY KEY (b, b_PK);
SELECT IMPLEMENT_TEMP_DESIGN('');
Now add a duplicate value into the unique key and commit the changes:
INSERT INTO dim_1 values ('r2', 'Xpk1', 1);
INSERT INTO dim_1 values ('r1', 'Xpk1', 1);
COMMIT;
Checking for constraint violations on table dim_1 detects the duplicate unique key error:
SELECT ANALYZE_CONSTRAINTS('dim_1');
 Schema Name | Table Name | Column Names | Constraint Name | Constraint Type | Column Values
-------------+------------+--------------+-----------------+-----------------+---------------
 public      | dim_1      | b, b_pk      | dim_1_multiuk   | PRIMARY         | ('r1', 'Xpk1')
 public      | dim_1      | b_fk         |                 | FOREIGN         | ('0')
(2 rows)

Now create a table with multicolumn foreign key and create the superprojections:
CREATE TABLE dim_2(z_fk1 VARCHAR(3), z_fk2 VARCHAR(4));
ALTER TABLE dim_2 ADD CONSTRAINT dim_2_multifk FOREIGN KEY (z_fk1, z_fk2) REFERENCES dim_1(b, b_PK);
SELECT IMPLEMENT_TEMP_DESIGN('');

-245-

Now insert a foreign key that matches a foreign key in table dim_1 and commit the changes:
INSERT INTO dim_2 VALUES ('r1', 'Xpk1');
COMMIT;
Checking for constraints on table dim_2 detects no violations:
SELECT ANALYZE_CONSTRAINTS('dim_2');
 Schema Name | Table Name | Column Names | Constraint Name | Constraint Type | Column Values
-------------+------------+--------------+-----------------+-----------------+---------------
(0 rows)

Add a value that does not match and commit the change:
INSERT INTO dim_2 values ('r1', 'NONE');
COMMIT;
Check for constraints on table dim_2 detects a foreign key violation:
SELECT ANALYZE_CONSTRAINTS('dim_2'); -- expect FK violation
 Schema Name | Table Name | Column Names | Constraint Name | Constraint Type | Column Values
-------------+------------+--------------+-----------------+-----------------+---------------
 public      | dim_2      | z_fk1, z_fk2 | dim_2_multifk   | FOREIGN         | ('r1', 'NONE')
(1 row)

Now analyze all constraints on all tables:
SELECT ANALYZE_CONSTRAINTS('');
 Schema Name | Table Name | Column Names | Constraint Name | Constraint Type | Column Values
-------------+------------+--------------+-----------------+-----------------+---------------
 public      | dim_1      | b, b_pk      | dim_1_multiuk   | PRIMARY         | ('r1', 'Xpk1')
 public      | dim_1      | b_fk         |                 | FOREIGN         | ('0')
 public      | dim_2      | z_fk1, z_fk2 | dim_2_multifk   | FOREIGN         | ('r1', 'NONE')
 public      | fact_1     | f_pk         |                 | PRIMARY         | ('1')
 public      | fact_1     | f_uk         |                 | UNIQUE          | ('1')
(5 rows)

To quickly clean up your database, issue the following commands:
DROP TABLE fact_1 cascade;
DROP TABLE dim_1 cascade;
DROP TABLE dim_2 cascade;
To learn about removing violating rows, see the DISABLE_DUPLICATE_KEY_ERROR (page 261) function.

See Also
Adding Primary Key and Foreign Key Constraints in the Administrator's Guide
COPY (page 323) NO COMMIT
CREATE TABLE (page 346) and ALTER TABLE (page 316) ADD CONSTRAINT

-246-

ANALYZE_STATISTICS

Collects and aggregates data samples and storage information as a background process from all nodes on which a projection is stored, then writes statistics into the catalog so that they can be used by the query optimizer. Without these statistics, the query optimizer would assume uniform distribution of data values and equal storage usage for all projections.

Syntax

SELECT ANALYZE_STATISTICS { ( '' ) | ( '[schema.]table' ) | ( 'projection' ) }

Return Value

•	0 - For success.
•	1 - For failure. Refer to vertica.log for details.

Parameters

''               Empty string. Collects statistics for all projections.
[schema.]table   Specifies the name of the table and optional schema. Collects statistics for all projections of the specified table. When using more than one schema, specify the schema that contains the projection.
projection       Specifies the name of the projection. Collects statistics for the specified projection.

Notes

Issuing the command against very large tables/projections could return results more slowly.

Example

The following example computes statistics on all projections in the database and returns 0 (success):

SELECT ANALYZE_STATISTICS ('');
 analyze_statistics
--------------------
                  0
(1 row)

The following command computes statistics on the shipping_dimension table and returns 0 (success):

SELECT ANALYZE_STATISTICS ('shipping_dimension');
 analyze_statistics
--------------------
                  0
(1 row)

The following command computes statistics on one of the shipping_dimension table's projections and returns 0 (success):

SELECT ANALYZE_STATISTICS('shipping_dimension_site02');
 analyze_statistics

--------------------
                  0
(1 row)

CANCEL_DEPLOYMENT

Cancels a design deployment.

Syntax

CANCEL_DEPLOYMENT()

Notes

•	Once you've started a deployment through the standard API, you must have permission to access the design context in order to cancel the associated deployment.
•	When you cancel a deployment, Database Designer cancels the projection refresh operation. It does not roll back any projection that it has already deployed and refreshed.
•	Once you cancel a deployment, you have three options:
	§ Complete the deployment process. Use the RUN_DEPLOYMENT function. The deployment will continue where it left off. Database Designer will not refresh any projections which are already up-to-date, as indicated by the deploy_status "duplicate".
	§ Use the DEPLOY_DESIGN function to re-deploy the design.
	§ Modify and deploy the design from scratch.

See Also

DEPLOY_DESIGN (page 259), RUN_DEPLOYMENT (page 296), and REVERT_DEPLOYMENT (page 295)

CANCEL_REFRESH

Cancels refresh operations initiated by START_REFRESH().

Syntax

CANCEL_REFRESH()

Notes

•	Refresh tasks run in a background thread in an internal session, so you cannot use INTERRUPT_STATEMENT (page 279) to cancel those statements. Instead, use CANCEL_REFRESH to cancel statements that are run by refresh-related internal sessions.

•	Execute CANCEL_REFRESH() on the same node on which START_REFRESH() was initiated.
•	CANCEL_REFRESH() cancels the refresh operation running on a node, waits for the cancellation to complete, and returns SUCCESS.
•	Only one set of refresh operations executes on a node at any time.

See Also

INTERRUPT_STATEMENT (page 279), START_REFRESH (page 308), VT_PROJECTION_REFRESH (page 482), and VT_SESSION (page 489)

CLEAR_DESIGN_SEGMENTATION_TABLE

Removes all information regarding which columns to use to segment tables. If you used the SET_DESIGN_SEGMENTATION_COLUMN (page 305) function to specify how tables are segmented, this function clears the information you specified.

Note: By default, Database Designer determines which columns to use to segment tables across all nodes.

Syntax

CLEAR_DESIGN_SEGMENTATION_TABLE ( design_context_name , design_name )

Parameters

design_context_name   Specifies the name of the design context schema to modify.
design_name           Specifies the name of the design configuration to modify.

CLEAR_DESIGN_TABLES

Removes the list of tables for which Database Designer was going to create projections from the design context, so you can modify it.

Syntax

CLEAR_DESIGN_TABLES ( design_context_name )

Notes

This is useful for quickly removing the entire list.

Example

The following drops the list of tables for which to develop projections from the vmart design context schema:

SELECT CLEAR_DESIGN_TABLES('vmart');

See Also

ADD_DESIGN_TABLES (page 238), REMOVE_DESIGN (page 292), and REMOVE_DESIGN_CONTEXT (page 292)
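The CLEAR_DESIGN_SEGMENTATION_TABLE function described above has no worked example in this section. A minimal invocation, assuming the vmart design context and VMartDesign configuration names used throughout the surrounding examples, would look like:

```sql
-- Clears any segmentation columns previously registered with
-- SET_DESIGN_SEGMENTATION_COLUMN, returning segmentation decisions
-- to Database Designer's defaults for the named configuration.
SELECT CLEAR_DESIGN_SEGMENTATION_TABLE('vmart', 'VMartDesign');
```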

CLEAR_QUERY_REPOSITORY

Triggers Vertica to clear query data from the query repository immediately.

Syntax

CLEAR_QUERY_REPOSITORY()

Notes

•	Vertica clears data based on established query repository configuration parameters. For example, it will use the value of the QueryRepoRetentionTime parameter to determine the number of days worth of query data to retain. (See Configuring Query Repository.)
•	The CLEAR_QUERY_REPOSITORY function resets the clock for the CleanQueryRepoInterval back to zero (0).

See Also

Collecting Query Information

CLOSE_ALL_SESSIONS

Closes all external sessions except the one issuing the CLOSE_ALL_SESSIONS function.

Syntax

CLOSE_ALL_SESSIONS()

Notes

Closing of the sessions is processed asynchronously. It might take some time for the sessions to be closed. Check the SESSIONS (page 458) table for the status.

Message

close_all_sessions | Close all sessions command sent. Check SESSIONS for progress.

Examples

Two user sessions opened, each on a different node:

=> select * from sessions;
-[ RECORD 1 ]
timestamp          | 2008-03-26 21:57:59
node_name          | site01
username           | release
client             | 127.0.0.1:55642
login_time         | 2008-03-26 21:43:50
sessionid          | rhel4-1-26555:0x14a26:534149977
txn_start          | 2008-03-26 21:44:06
txnid              | 45035996273734813
txn_descript       | user release (select * from sessions;)
stmt_start         | 2008-03-26 21:57:59
stmtid             | 0
last_stmt_duration | 1
current_stmt       | select * from sessions;
last_stmt          | select * from sessions;

-[ RECORD 2 ]
timestamp          | 2008-03-26 21:57:59
node_name          | site01
username           | release
client             | 10.0.242.8:55652
login_time         | 2008-03-26 21:57:42
sessionid          | rhel4-1-26555:0x150bb:1059190338
txn_start          | 2008-03-26 21:57:42
txnid              | 45035996273734822
txn_descript       | user release (COPY ClickStream_Fact FROM '/data/ClickStream_Fact.tbl' DELIMITER '|' NULL '\\n' DIRECT;)
stmt_start         | 2008-03-26 21:57:42
stmtid             | 17179869186
last_stmt_duration | 0
current_stmt       | COPY ClickStream_Fact FROM '/data/ClickStream_Fact.tbl' DELIMITER '|' NULL '\\n' DIRECT;
last_stmt          |
-[ RECORD 3 ]
timestamp          | 2008-03-26 21:57:59
node_name          | site01
username           | release
client             | 10.0.242.8:55659
login_time         | 2008-03-26 21:57:45
sessionid          | rhel4-1-26555:0x150c3:304606079
txn_start          | 2008-03-26 21:57:45
txnid              | 45035996273734823
txn_descript       | user release (COPY ClickStream_Fact FROM '/data/ClickStream_Fact.tbl' DELIMITER '|' NULL '\\n';)
stmt_start         | 2008-03-26 21:57:46
stmtid             | 17179869187
last_stmt_duration | 0
current_stmt       | COPY ClickStream_Fact FROM '/data/ClickStream_Fact.tbl' DELIMITER '|' NULL '\\n';
last_stmt          |

=> select close_all_sessions();
close_all_sessions | Close all sessions command sent. Check sessions for progress.

sessions contents after close_all_sessions():

=> select * from sessions;
-[ RECORD 1 ]
timestamp          | 2008-03-26 22:00:25
node_name          | site01
username           | release
client             | 127.0.0.1:55642
login_time         | 2008-03-26 21:43:50
sessionid          | rhel4-1-26555:0x14a26:534149977
txn_start          | 2008-03-26 21:44:06
txnid              | 45035996273734813
txn_descript       | user release (select * from sessions;)
stmt_start         | 2008-03-26 22:00:25
stmtid             | 0
last_stmt_duration | 557
current_stmt       | select * from sessions;
last_stmt          | select close_all_sessions();
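The CLOSE_ALL_SESSIONS workflow in this example can be condensed to three statements. This is an illustrative sketch (output omitted, since the close is asynchronous and timing-dependent):

```sql
SELECT * FROM sessions;       -- inspect the open external sessions
SELECT close_all_sessions();  -- request an asynchronous close of all but this session
SELECT * FROM sessions;       -- re-check until the other sessions disappear
```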

CLOSE_SESSION

Interrupts the specified external session, rolls back the current transaction, if any, and closes the socket.

Syntax

CLOSE_SESSION( sessionid )

Parameters

sessionid   A string that specifies the session to close. This identifier is unique within the cluster at any point in time but can be reused when the session closes.

Notes

Closing of the session is processed asynchronously. It could take some time for the session to be closed. Check the SESSIONS (page 458) table for the status.

Messages

close_session | Session close command sent. Check SESSIONS for progress.
Error: invalid Session ID format             For a badly formatted sessionID
Error: Invalid session ID or statement key   For an incorrect sessionID parameter

Examples

User session opened. RECORD 2 shows the user session running the COPY DIRECT statement:

=> SELECT * FROM SESSIONS;
-[ RECORD 1 ]
current_timestamp       | 2008-04-01 14:53:31
node_name               | site01
user_name               | release
client_hostname         | 127.0.0.1:57141
login_timestamp         | 2008-04-01 14:41:26
session_id              | rhel4-1-30361:0xd7e3e:994462853
transaction_start       | 2008-04-01 14:48:54
transaction_id          | 45035996273741092
transaction_description | user release (select * from SESSIONS;)
statement_start         | 2008-04-01 14:53:31
statement_id            | 0
last_stmt_duration      | 1

current_statement       | select * from SESSIONS;
last_statement          |
-[ RECORD 2 ]
current_timestamp       | 2008-04-01 14:53:31
node_name               | site01
user_name               | release
client_hostname         | 127.0.0.1:57142
login_timestamp         | 2008-04-01 14:52:55
session_id              | rhel4-1-30361:0xd83ac:1017578618
transaction_start       | 2008-04-01 14:53:26
transaction_id          | 45035996273741096
transaction_description | user release (COPY ClickStream_Fact FROM '/data/clickstream/1g/ClickStream_Fact.tbl' DELIMITER '|' NULL '\\n' DIRECT;)
statement_start         | 2008-04-01 14:53:26
statement_id            | 17179869528
last_statement_duration | 0
current_statement       | COPY ClickStream_Fact FROM '/data/clickstream/1g/ClickStream_Fact.tbl' DELIMITER '|' NULL '\\n' DIRECT;
last_statement          |

Close user session rhel4-1-30361:0xd83ac:1017578618:

=> SELECT CLOSE_SESSION('rhel4-1-30361:0xd83ac:1017578618');
-[ RECORD 1 ]
close_session | Session close command sent. Check SESSIONS for progress.

Session rhel4-1-30361:0xd83ac:1017578618 closed, no longer displayed in SESSIONS:

=> SELECT * FROM SESSIONS;
-[ RECORD 1 ]
current_timestamp       | 2008-04-01 14:54:11
node_name               | site01
user_name               | release
client_hostname         | 127.0.0.1:57141
login_timestamp         | 2008-04-01 14:41:26
session_id              | rhel4-1-30361:0xd7e3e:994462853
transaction_start       | 2008-04-01 14:48:54
transaction_id          | 45035996273741092
transaction_description | user release (select * from SESSIONS;)
statement_start         | 2008-04-01 14:54:11
statement_id            | 0
last_statement_duration | 98
current_statement       | select * from SESSIONS;
last_statement          | select close_session('rhel4-1-30361:0xd83ac:1017578618');

See Also

SESSIONS (page 458)

CONFIGURE_DEPLOYMENT

Analyzes a design for deployment, but does not deploy it.

CONFIGURE_DEPLOYMENT prepares a design for deployment by:

•	Setting the deployment status of the design to pending.
•	Either creating a <designName>_deployment_projections table, if one does not exist, to track the status of every projection in the design (for new designs) or, if the table already exists, clearing it of existing projection entries.
•	Analyzing the design to be deployed and determining its specific projection implementation.
•	Placing a record in the <designName>_deployment_projections table for each projection to be added to or dropped from the deployment. See the design_<design_name>_deployment view.

If a duplicate of a projection to be added already exists in the catalog, it is given a status of duplicate and Database Designer does not redeploy it. Database Designer determines that projections are duplicates if they are anchored on the same table as the projection to be added and have the same:

	§ Table columns with same encoding
	§ Sort columns in the same order
	§ Segmentation nodes
	§ Segmentation offset (none for unsegmented)
	§ Segmentation expression (none for unsegmented)

Note: If there is a naming conflict between a new projection and one already in the catalog, the name of the deployed projection is modified.

Note: If you attempt to configure a deployment for a design created in Vertica version 3.0, Vertica displays an error indicating that you must regenerate the design. This occurs because there is no deployment system table for the design. Use either CREATE_DESIGN (page 255) or UPDATE_DESIGN (page 310).

When you're ready to deploy the design, use the RUN_DEPLOYMENT (page 296) function.
Syntax

CONFIGURE_DEPLOYMENT ( design_context_name , design_name )

Parameters

design_context_name   Specifies the name of the design context schema that contains the design to analyze.
design_name           Specifies the name of the design to analyze.

Notes

Instructing Database Designer to create a deployment implementation without actually deploying the design is useful if you want to:

•	Create a test design. See Deploying Test Designs.
•	Prevent specific projections from being dropped during deployment. See Preserving Existing Projections During Deployment.
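The two-step deployment path can be sketched as follows. Note that this is an illustrative sketch: the argument list for RUN_DEPLOYMENT is assumed here to mirror CONFIGURE_DEPLOYMENT's; confirm it against the RUN_DEPLOYMENT reference page (page 296) before use.

```sql
-- Step 1: analyze and prepare the deployment (status becomes pending).
SELECT CONFIGURE_DEPLOYMENT('vmart', 'VMartDesign');

-- Step 2: later, implement the prepared projections.
SELECT RUN_DEPLOYMENT('vmart', 'VMartDesign');  -- assumed signature
```

DEPLOY_DESIGN (page 259) performs both steps in a single call.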

Example

The following example analyzes the design (VMartDesign) from the vmart context, but does not deploy it:

SELECT CONFIGURE_DEPLOYMENT('vmart','VMartDesign');

See Also

UPDATE_DESIGN (page 310), RUN_DEPLOYMENT (page 296), DEPLOY_DESIGN (page 259), CANCEL_DEPLOYMENT (page 248) and REVERT_DEPLOYMENT (page 295)

CREATE_DESIGN

Generates a new physical schema design.

Syntax

CREATE_DESIGN ( design_context_name , design_name )

Parameters

design_context_name   Specifies the name of the design context schema for which to generate a design.
design_name           Specifies the name of the design configuration for which to generate a design.

Notes

CREATE_DESIGN creates one of the following designs:

•	A basic design if no query input table exists or it is empty.
•	An optimized design if the query input table is populated.

Example

The following example creates a design for the VMartDesign configuration within the vmart design context:

SELECT CREATE_DESIGN('vmart','VMartDesign');

See Also

DEPLOY_DESIGN (page 259), REMOVE_DESIGN (page 292)

CREATE_DESIGN_CONFIGURATION

Creates a design configuration with the specified name.

Syntax

CREATE_DESIGN_CONFIGURATION ( design_context_name , design_name )

Parameters

design_context_name   Specifies the name of the design context schema in which to create the design.
design_name           Specifies the name of the design configuration to create. Design names should be no longer than 64 characters.

Notes

This statement creates a design configuration within the context schema specified. Specifically, it creates system tables to store the design configuration.

Example

The following example creates a design configuration called VMartDesign in the vmart design context:

SELECT CREATE_DESIGN_CONFIGURATION('vmart','VMartDesign');

See Also

CREATE_DESIGN_CONTEXT (page 256)

CREATE_DESIGN_CONTEXT

Creates a design context schema with the specified name.

Syntax

CREATE_DESIGN_CONTEXT ( design_context_name , [ user_name ] )

Parameters

design_context_name   Specifies the name of the design context schema to be created.
user_name             Grants USAGE privileges on the design context schema to the user specified. This enables the user to create, modify, and drop designs within the context schema. It does not give the user the ability to drop the context. To grant USAGE privileges on the design context schema, specify an existing user_name. See Required Privileges for Creating Designs.

Notes

•	Only the database administrator can use the CREATE_DESIGN_CONTEXT statement.
•	By default, the database administrator has all privileges on the design context schema.

Example

The following creates a design context schema for the VMart demo database:

SELECT CREATE_DESIGN_CONTEXT('vmart');
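The design-context functions described in this section chain together. An end-to-end sketch, assuming the vmart and VMartDesign names used by the surrounding examples (illustrative only; run as the database administrator):

```sql
SELECT CREATE_DESIGN_CONTEXT('vmart');                      -- 1. create the context schema
SELECT CREATE_DESIGN_CONFIGURATION('vmart', 'VMartDesign'); -- 2. create a configuration in it
SELECT CREATE_DESIGN('vmart', 'VMartDesign');               -- 3. generate the physical design
SELECT DEPLOY_DESIGN('vmart', 'VMartDesign');               -- 4. analyze and deploy it
```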

See Also

REMOVE_DESIGN_CONTEXT (page 292)

CREATE_DESIGN_QUERIES_TABLE

Creates a query input table with the name you specify.

Syntax

CREATE_DESIGN_QUERIES_TABLE ( query_table_name )

Parameters

query_table_name   Specifies the name of the query input table to create.

Notes

This function is used in conjunction with LOAD_DESIGN_QUERIES (page 282) and SET_DESIGN_QUERIES_TABLE (page 303) to create a query input table, load it with data from the query repository, and establish it as the query input table for the specified design.

Example

The following example creates a query input table and names it vmart_query_input:

SELECT CREATE_DESIGN_QUERIES_TABLE('vmart_query_input');

See Also

SET_DESIGN_QUERIES_TABLE (page 303) and RESET_DESIGN_QUERIES_TABLE (page 293)

CREATE_PROJECTION_DESIGN

Creates the specified physical schema design, but does not implement it.

Syntax

CREATE_PROJECTION_DESIGN ( design_name , [ query_table_name , design_[schema.]tables , large_tables , int_k_safety_value ] )

Parameters

design_name        Specifies the name of the design context schema and design configuration to be created. Design names should be no longer than 64 characters.
query_table_name   Specifies the name of the query input table that contains the queries you want to use to optimize the design. If you do not specify this parameter, Database Designer creates a basic design. The default value for this parameter is ''.

design_[schema.]tables   Specifies the tables for which to create projections. Use a comma-delimited list to specify tables. To specify a specific table, use the form [schema_name.]table_name, where schema_name is the schema that contains the table for which to create a projection. If you do not specify this parameter, Database Designer creates projections for all user tables in the database. The default value for this parameter is ''.

int_k_safety_value   Sets the K-safety value to 0, 1, or 2. See K-Safety for an overview. If you do not specify this parameter, the value of K defaults to 0 for database clusters that contain fewer than three nodes or 1 for database clusters that contain three or more nodes. The value of K can be one (1) or two (2) only when the physical schema design meets certain requirements.

Notes

•	This function creates a design, but does not implement it. It returns the number of tables for which it created projections. It also logs details of the design process, such as a summary of the projections it created, in the designer log file (designer.log), which is located in the same directory as vertica.log.
•	To implement the design, use the GET_DESIGN_SCRIPT (page 275) function to create a SQL script which you can redirect to a file and run to create projections. To create a file that contains the script, use \o to redirect the results to a file and \t to show only tuples (no headings). To deploy the projections, use the \i meta-command in vsql to execute the file.
The default value for this parameter is ".]table_name where schema_name is the schema that contains the table for which to create a projection. • Examples • The following example creates a basic design named VMart_Schema within a design context named VMart_Schema. Sets the K-safety value to 0. Database Designer does not create any segmented projections.]tables Specifies the tables for which to create projections.]tables parameter. use \o to redirect the results to a file and \t to show only tuples (no headings). Use a comma-delimited list to specify tables.

§ The value of K-Safety is one (1). '"'.store_orders_fac t.store_sales_fact. None of the projections are segmented.store_orders_fact. • SELECT CREATE_PROJECTION_DESIGN('VMart_Schema').store. SELECT CREATE_PROJECTION_DESIGN('VMart_Schema'.*' are the tables for which to create the design.store_sales_fact. § 'public.store. In this case it's all the tables in the public and store schemas. § None of the projections are segmented. • The following example creates an optimized design named VMart_Opt within a design context named VMart_Opt.inventory_fact. Syntax DEPLOY_DESIGN ( design_context_name . By default: § The design contains projections for all the tables in the database. 'public. See Also GET_DESIGN_SCRIPT (page 275) DEPLOY_DESIGN Analyzes a design and then deploys it.online_s ales_fact' are the tables to segment. The design will be optimized for the queries stored in the public. § 'public. SELECT CREATE_PROJECTION_DESIGN('VMart_Opt'.* and store.store. No queries are provided. § 0 is the K-safety value of the database. online_sales.online_sales.*.online_sales_fact'. Were: § '"' instructs Database Designer to a basic design. Typically fact and large dimension tables are segmented across all nodes within the database cluster. This improves the efficiency of the database.inventory_fact. The value of K-Safety is one (1).store. 'public.*'.myQueries').store. The following example creates a basic design named VMart_Schema within a design context named VMart_Schema.SQL Functions § § § The design contains projections for all the tables in the database. '0').myQueries query input file. design_name ) -259- . 'public.

Note: If there is a naming conflict between a new projection and one already in the catalog. • Finally. This occurs because there is no deployment system table for the design. SELECT DEPLOY_DESIGN('vmart'. to track the status of every projection in the design (for new designs) or. If a projection to be added already exists in the deployment (catalog). Specifies the name of the design to analyze. clearing it of existing projection entries. DEPLOY_DESIGN calls RUN_DEPLOYMENT (page 296) to implement the projections. the name of the new projection is changed in the design repository and in the <designName>_deployment_projections table. if one does not exist. A column already exists in the deployment catalog if it is anchored on the sample table as the projection to be added and has the same: § Table columns with same encoding § Sort columns in the same order § Segmentation nodes. § Segmentation expression (none for unsegmented).0. Notes DEPLOY_DESIGN prepares a design for deployment by: • • • • Setting the deployment status of the design to pending. -260- .'VMartDesign'). if the table already exists.SQL Reference Manual Parameters design_context_na me design_name Specifies the name of the design context schema that contains the design to analyze. Vertica displays an error indicating that you must regenerate the design. Note: If you attempt to deploy a design created in Vertica version 3. § Segmentation offset (none for unsegmented). See the design_<design_name>_deployment view. Analyzing the design to be deployed and determining its specific projection implementation. Placing a record in the <designName>_deployment_projections table for each projection to be added to or removed from the deployment. Either creating a <designName>_deployment_projections table. Example The following example analyzes the design (VMartDesign) from the vmart context and then calls RUN_DEPLOYMENT to deploy it. Use either CREATE_DESIGN (page 255) or UPDATE_DESIGN (page 310). 
it is given a status of deployed and Database Designer does not redeploy it.

x integer). values (1. Then correct the violations and turn integrity checking back on with REENABLE_DUPLICATE_KEY_ERROR (page 292)(). dim WHERE pk=fk ORDER BY x. first save the original dim rows that match the duplicated primary key.SQL Functions See Also RUN_DEPLOYMENT (page 296). Queries execute as though no constraints are defined on the schema. CANCEL_DEPLOYMENT (page 248) and REVERT_DEPLOYMENT (page 295) DISABLE_DUPLICATE_KEY_ERROR Disables error messaging when Vertica finds duplicate PRIMARY KEY/UNIQUE KEY values at runtime. CREATE PROJECTION prejoin_p (fk. values (1.2).1). CREATE TEMP TABLE dim_temp(pk integer. Any attempt to delete the record results in the following error message: ROLLBACK: 1 Duplicate primary key detected in FK-PK join Hash-Join (x dim_p). Effects are session scoped. 1 To remove the violation. use the following sequence of commands. Syntax SELECT DISABLE_DUPLICATE_KEY_ERROR(). value In order to remove the constraint violation (pk=1). Notice the last statement inserts a duplicate primary key value of 1: INSERT INTO dim INSERT INTO dim INSERT INTO dim COMMIT. CREATE PROJECTION dim_p (pk. Usage The following sample statements create dimension table dim and the corresponding projection: CREATE TABLE dim (pk INTEGER PRIMARY KEY. The following statements load values into table dim. x) AS SELECT * FROM fact. which puts the database back into the state just before the duplicate primary key was added. CAUTION: When called. --Constraint violation Table dim now contains duplicate primary key values. DISABLE_DUPLICATE_KEY_ERROR() suppresses data integrity checking and can lead to incorrect query results.2). values (2. x INTEGER). The next two statements create table fact and the pre-join projection that joins fact to dim. -261- . CREATE TABLE fact(fk INTEGER REFERENCES dim(pk)). but you cannot delete the violating row because of the presence of the pre-join projection. 
Use this function only after you insert duplicate primary keys into a dimension table in the presence of a prejoin projection. x) AS SELECT * FROM dim ORDER BY x UNSEGMENTED ALL NODES. pk.

CREATE TEMP TABLE fact_temp(fk integer). 2 Temporarily suppresses the enforcement of data integrity checking: SELECT DISABLE_DUPLICATE_KEY_ERROR().1) and (1. -. 4 Allow the database to resume data integrity checking: SELECT REENABLE_DUPLICATE_KEY_ERROR(). If you receive the following error message.New insert statement joins fact with dim on primary key value=1 INSERT INTO dim values (1. except that an additional INSERT statement joins the fact table to the dimension table by a primary key value of 1: INSERT INTO dim values (1.original dim row INSERT INTO fact_temp SELECT * FROM fact WHERE fk=1. a row with values from the fact and dimension table is now in the prejoin projection. That is. ROLLBACK: Delete: could not find a data row to delete (data integrity violation?) The difference between this message and the rollback message in the previous example is that a fact row contains a foreign key that matches the duplicated primary key.2).2). first save the original dim and fact rows that match the duplicated primary key: CREATE TEMP TABLE dim_temp(pk integer. In order for the DELETE statement (Step 3 in the following example) to complete successfully. 3 Remove the the original row that contains duplicate values: DELETE FROM dim WHERE pk=1. as well. 3 Remove the duplicate primary keys. x integer). This example is nearly identical to the previous example. it means that the duplicate records you want to delete are not identical. -. INSERT INTO dim_temp SELECT * FROM dim WHERE pk=1 AND x=1. the records contain values that differ in at least one column that is not a primary key.SQL Reference Manual INSERT INTO dim_temp SELECT * FROM dim WHERE pk=1 AND x=1.2). Thus. Caution: Remember that issuing this command suppresses the enforcement of data integrity checking. 1 To remove the violation. (1. -. extra predicates are required to identify the original dimension table values (the values that are in the prejoin). 6 Validate your dimension and fact tables.1). 
which has been inserted. INSERT INTO fact values (1).original dim row 2 Temporarily disable error messaging on duplicate constraint values: SELECT DISABLE_DUPLICATE_KEY_ERROR(). for example. INSERT INTO dim values (2.Duplicate primary key value=1 COMMIT. 5 Reinsert the original values back into the dimension table: INSERT INTO dim SELECT * from dim_temp. -. COMMIT. These steps implicitly remove all fact rows with the matching foreign key. -262- .

2) values that caused the violation.

   b) Remove all remaining rows:

      DELETE FROM dim WHERE pk=1;

4  Turn on integrity checking:

   SELECT REENABLE_DUPLICATE_KEY_ERROR();

5  Reinsert the original values back into the fact and dimension table:

   INSERT INTO dim SELECT * from dim_temp;
   INSERT INTO fact SELECT * from fact_temp;
   COMMIT;

6  Validate your dimension and fact tables.

See Also

ANALYZE_CONSTRAINTS (page 241)
REENABLE_DUPLICATE_KEY_ERROR (page 292)

DISPLAY_LICENSE

Returns license information.

Syntax

SELECT DISPLAY_LICENSE()

Examples

SELECT DISPLAY_LICENSE();
                   display_license
-----------------------------------------------------
 Vertica Systems, Inc.
 2007-08-03
 Perpetual
 0
 500GB
(1 row)

DO_TM_TASK

Runs a Tuple Mover operation (moveout) on one or more projections defined on the specified table.

Syntax

SELECT DO_TM_TASK( 'moveout' , table_name [ , projection ] );

Parameters

moveout      Moves out all projections on the specified table (if a particular projection is not specified).
table_name   Is the name of the specified table.
projection   [Optional] If projection is not passed as an argument, all projections in the system are used. If projection is specified, DO_TM_TASK looks for a projection of that name and, if found, uses it. If a named projection is not found, the function looks for a table with that name and, if found, moves out all projections on that table.

Notes

DO_TM_TASK() is useful because you can move out all projections from a table or database, without having to name each projection individually. You do not need to stop the Tuple Mover.

Data in the WOS

Note: If you are continuously loading data into the WOS, you must stop the load before you can drop a partition. The WOS is not partitioned, so it must be empty for partitioning to succeed.

To move data out of the WOS:

1  Advance the epoch by issuing the following command:

   SELECT ADVANCE_EPOCH (page 240)();

2  Move data out of the WOS with the command:

   SELECT DO_TM_TASK (page 263)('moveout', 'table-name');

   This function performs a moveout of all projections defined over the specified table.

3  To determine if data remains in the WOS and, if so, which projections have data in the WOS, use the following system tables:

   § Use the SYSTEM (page 461) table to determine if any data remains in the WOS:

     SELECT WOS_BYTES FROM SYSTEM;

   § If data remains in the WOS, use PROJECTION_STORAGE (page 448) to determine which projections have data in the WOS:

     SELECT * FROM PROJECTION_STORAGE;

   Note: These tables show all data in the WOS, not just committed data.

See Also

COLUMN_STORAGE (page 426)
DROP_PARTITION (page 266)

DUMP_PARTITION_KEYS (page 270)
DUMP_PROJECTION_PARTITION_KEYS (page 270)
DUMP_TABLE_PARTITION_KEYS (page 271)
PARTITION_PROJECTION (page 288)
SELECT ADVANCE_EPOCH (page 240)
Partitioning Tables in the Administrator's Guide

Examples

Expression                        Result
DO_TM_TASK('moveout', 'fact')     Performs a moveout of all projections for table fact
DO_TM_TASK('moveout', 'fact_p')   Performs a moveout for projection fact_p

DROP_LOCATION

Removes the specified storage location.

Syntax

DROP_LOCATION ( 'path' , 'site' )

Parameters

path   Specifies where the storage location to drop is mounted.
site   Is the Vertica site where the location is available.

Notes

•	Dropping a storage location is a permanent operation and cannot be undone. Therefore, Vertica recommends that you retire a storage location before dropping it. This will allow you to verify that you actually want to drop a storage location before doing so. Additionally, you can easily restore a retired storage location.
•	Dropping storage locations is limited to locations that contain only temp files.
•	If a location used to store data and you modified it to store only temp files, the location may still contain data files. If the storage location contains data files, Vertica will not allow you to drop it. You can manually merge out all the data in this location, wait for the ATM to merge out the data files automatically, or you can drop partitions. Deleting data files will not work.

Example

The following example drops a storage location on node3 that was used to store temp files:

SELECT DROP_LOCATION('/secondVerticaStorageLocation/' , 'node3');

See Also
•  RETIRE_LOCATION (page 294) in this SQL Reference Guide
•  Dropping Storage Locations and Retiring Storage Locations in the Administrator's Guide

DROP_PARTITION
Forces the partition of projections (if needed) and then drops the specified partition.

Syntax
DROP_PARTITION [ ( table_name ) , ( partition_value ) ]

Parameters
table_name        Specifies the name of the table.
partition_value   Must be specified as a string (within quotes) for all data types; for example: DROP_PARTITION('t1', '2');

Notes
Partitioning functions take immutable functions only, in order that the same information be available across all nodes. (Immutable functions return the same answers when provided the same inputs; for example, 2+2 always equals 4.)

Restrictions
•  When specifying arguments in CREATE TABLE … PARTITION BY (page 346) expressions, single quotes are optional for INT and FLOAT data types. All other data types require quotes.
•  The specified table cannot be used as a dimension in a pre-joined projection.
•  The specified table cannot contain projections in a non-up-to-date state.
•  Projections anchored on the specified table cannot have data in the WOS.

Data in the WOS
Note: If you are continuously loading data into the WOS, you must stop the load before you can drop a partition. You do not need to stop the Tuple Mover. The WOS is not partitioned, so it must be empty for the drop to succeed.

To move data out of the WOS:
1  Advance the epoch by issuing the following command: SELECT ADVANCE_EPOCH (page 240)();
2  Move data out of the WOS with the command: SELECT DO_TM_TASK (page 263)('moveout', 'table-name'); This function performs a moveout of all projections defined over the specified table.
3  To determine if data remains in the WOS and, if so, which projections have data in the WOS, use the following system tables:
   §  Use the SYSTEM (page 461) table to determine if any data remains in the WOS: SELECT WOS_BYTES FROM SYSTEM;
   §  If data remains in the WOS, use PROJECTION_STORAGE (page 448) to determine which projections have data in the WOS: SELECT * FROM PROJECTION_STORAGE;
   Note: These tables show all data in the WOS, not just committed data.

See Also
ALTER PROJECTION MOVEOUT (page 314)
ADVANCE EPOCH (page 240)
CREATE TABLE (page 346)
DO_TM_TASK (page 263)
DUMP_PARTITION_KEYS (page 270)
DUMP_PROJECTION_PARTITION_KEYS (page 270)
DUMP_TABLE_PARTITION_KEYS (page 271)
MERGE_PARTITIONS (page 287)
PARTITION_PROJECTION (page 288)
PARTITION_TABLE (page 289)
VT_COLUMN_STORAGE (page 469)
VT_PROJECTION (page 481)
Partitioning Tables in the Administrator's Guide

Examples
The following sequence of commands partitions and drops data by year:
CREATE TABLE fact (
   ...
   date_col DATE NOT NULL,
   ...
) PARTITION BY extract('year' FROM date_col);
SELECT DROP_PARTITION ('fact', 1999);
Or:
SELECT DROP_PARTITION ('fact', extract('year' FROM '1999-01-01'::date));

The following example partitions and drops data by state:
CREATE TABLE fact (
   ...
   state VARCHAR2 NOT NULL,
   ...
) PARTITION BY state;
SELECT DROP_PARTITION ('fact', 'MA');

The following example partitions and drops data by year and month:
CREATE TABLE fact (
   ...
   year INTEGER NOT NULL,
   month INTEGER NOT NULL,
   ...
) PARTITION BY year * 12 + month;
Using a constant for Oct 2007, 2007*12 + 10 = 24094:
SELECT DROP_PARTITION('fact', '24094');
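The year-and-month key arithmetic can be checked independently of the database; the following standalone sketch (plain Python, not Vertica syntax) models the PARTITION BY year * 12 + month mapping:

```python
# The PARTITION BY expression year * 12 + month maps each (year, month)
# pair to a distinct integer partition key.
def partition_key(year: int, month: int) -> int:
    return year * 12 + month

# October 2007 yields the constant used in the DROP_PARTITION example above.
print(partition_key(2007, 10))  # 24094
```

Because consecutive months map to consecutive keys, a range of months can also be dropped or merged by iterating over a key range.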

DUMP_CATALOG
Returns an internal representation of the Vertica catalog. This function is used for diagnostic purposes.

Syntax
SELECT DUMP_CATALOG()

Notes
Send the output to Technical Support (on page 33).

Example
This example shows how to extract the catalog into a file (/tmp/catalog.txt):
\o /tmp/catalog.txt
SELECT DUMP_CATALOG();
\o

DUMP_LOCKTABLE
Determines whether or not a lock has been released.

Syntax
SELECT DUMP_LOCKTABLE()

Notes
Use DUMP_LOCKTABLE if Vertica becomes unresponsive, and send the output to Technical Support (on page 33).

DUMP_PARTITION_KEYS
Dumps the partition keys of all projections in the system.

Syntax
DUMP_PARTITION_KEYS( )

See Also
DO_TM_TASK (page 263)
DROP_PARTITION (page 266)
DUMP_PROJECTION_PARTITION_KEYS (page 270)
DUMP_TABLE_PARTITION_KEYS (page 271)
PARTITION_PROJECTION (page 288)
PARTITION_TABLE (page 289)
Partitioning Tables in the Administrator's Guide

DUMP_PROJECTION_PARTITION_KEYS
Dumps the partition keys of the specified projection.

Syntax
DUMP_PROJECTION_PARTITION_KEYS( projection_name )

Parameters
projection_name    Specifies the name of the projection.

See Also
DO_TM_TASK (page 263)
DROP_PARTITION (page 266)
DUMP_PARTITION_KEYS (page 270)
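Neither dump function above lists an example; a minimal hypothetical session might look like this (fact_p1 is a placeholder projection name):

```sql
SELECT DUMP_PARTITION_KEYS();                      -- keys for every projection in the system
SELECT DUMP_PROJECTION_PARTITION_KEYS('fact_p1');  -- keys for one projection only
```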

DUMP_TABLE_PARTITION_KEYS (page 271)
PARTITION_PROJECTION (page 288)
PARTITION_TABLE (page 289)
Partitioning Tables in the Administrator's Guide

DUMP_TABLE_PARTITION_KEYS
Dumps the partition keys of all projections anchored on the specified table.

Syntax
DUMP_TABLE_PARTITION_KEYS( table_name )

Parameters
table_name    Specifies the name of the table.

See Also
DO_TM_TASK (page 263)
DROP_PARTITION (page 266)
DUMP_PARTITION_KEYS (page 270)
DUMP_PROJECTION_PARTITION_KEYS (page 271)
PARTITION_PROJECTION (page 288)
PARTITION_TABLE (page 289)
Partitioning Tables in the Administrator's Guide

EXPORT_CATALOG
Generates a SQL script that can be used to recreate a physical schema design in its current state on a different cluster.

Syntax
EXPORT_CATALOG( filename , { design | design_all } )

Parameters
filename      Specifies the path and name of the SQL output file. An empty string dumps the script to console.
design        Instructs Vertica to export the catalog.
design_all    Instructs Vertica to export the catalog, system schemas, system tables, system views, and the projections on these system tables.
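As a sketch of the two EXPORT_CATALOG parameter forms described above (the output paths are placeholders; an empty string would dump the script to the console):

```sql
SELECT EXPORT_CATALOG('/tmp/design_export.sql', 'design');          -- catalog only
SELECT EXPORT_CATALOG('/tmp/design_all_export.sql', 'design_all');  -- catalog plus system schemas,
                                                                    -- tables, views, and their projections
```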

Notes
•  Exporting a design is useful for quickly moving a design to another cluster.
•  Use the design_all parameter when adding a node to a cluster.
•  The sort order is explicitly defined in the exported script. If a projection is created with no sort order, Vertica implicitly assigns a sort order based on the SELECT columns in the projection definition.

Restrictions
The export script Vertica generates is portable as long as all the projections were generated using UNSEGMENTED ALL NODES or SEGMENTED ALL NODES. Projections might not exist on ALL NODES for the following reasons:
•  A projection was dropped from a node.
•  A projection was created only on a subset of nodes.
•  An additional node was added since the projection set was created. See Integrating Data Into a New Database Design.

EXPORT_DESIGN_CONFIGURATION
Generates a VSQL script that can be used to recreate the design configuration on another system.

Syntax
EXPORT_DESIGN_CONFIGURATION ( design_context_name , design_name , export_file_name )

Parameters
design_context_name    Specifies the name of the design context schema that contains the design configuration to export.
design_name            Specifies the name of the design configuration to export.
export_file_name       Specifies a unique name for the file generated by this function. If a path is not specified with the export_file_name, Vertica creates the file in the catalog directory.

Notes
•  This function exports all the Database Designer tables that were created in a design context that relate to the specified design configuration. It does not export design tables from the logical schema that are referenced by Database Designer. (See EXPORT_DESIGN_TABLES (page 273).)
•  Use EXPORT_DESIGN_TABLES and EXPORT_DESIGN_CONFIGURATION to produce two scripts which, when run in sequence, can be used to recreate the logical schema and design context in another database.

Example
The following example creates a file named VMartDesignExport that contains the necessary data to recreate the VMartDesign configuration:
SELECT EXPORT_DESIGN_CONFIGURATION('vmart','VMartDesign','VMartDesignExport');

See Also
EXPORT_DESIGN_TABLES (page 273)

EXPORT_DESIGN_TABLES
Generates a script that, when run, recreates all the database structures for the logical schema that are referenced by the design context. This function is useful for exporting a design to another system.

Syntax
EXPORT_DESIGN_TABLES ( design_context_name , design_name , export_file_name )

Parameters
design_context_name    Specifies the name of the design context schema that contains the design to export.
design_name            Specifies the name of the design to export.
export_file_name       Specifies a unique name for the file generated by this function. If a path is not specified with the export_file_name, Vertica creates the file in the catalog directory.

Notes
This function exports all the design tables from the logical schema that are referenced by the design context specified. It also exports all the schemas, projections, constraints, and views associated with these tables.

Example
The following example creates a file named VMartExport that contains data to recreate all the database structures for the logical schema associated with the VMartDesign within the vmart design context:
SELECT EXPORT_DESIGN_TABLES('vmart','VMartDesign','VMartExport');

See Also
EXPORT_DESIGN_CONFIGURATION (page 272)

EXPORT_STATISTICS
Generates an XML file that contains statistics for the database.
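EXPORT_STATISTICS is meant to run after statistics have been collected with ANALYZE_STATISTICS; a hypothetical sequence follows (the table name and output path are placeholders, and the exact ANALYZE_STATISTICS argument form is documented on page 247):

```sql
SELECT ANALYZE_STATISTICS('fact');               -- collect and aggregate samples first
SELECT EXPORT_STATISTICS('/tmp/db_stats.xml');   -- then write the statistics to an XML file
```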

Syntax
EXPORT_STATISTICS( filename )

Parameters
filename    Specifies the path and name of the XML output file. An empty string dumps the script to console.

Notes
•  EXPORT_STATISTICS is used in conjunction with ANALYZE_STATISTICS (page 247) and READ_DATA_STATISTICS (page 291) to provide information regarding the database for Database Designer to use to generate designs.
•  Before you export statistics for the database, be sure to run ANALYZE_STATISTICS to collect and aggregate data samples and storage information. If you do not use ANALYZE_STATISTICS, Database Designer will produce a sub-optimal projection similar to the ones created for temporary designs.

GET_AHM_EPOCH
Returns the number of the epoch in which the Ancient History Mark is located. Data deleted up to and including the AHM epoch can be purged from physical storage.

Syntax
GET_AHM_EPOCH()

Note: The AHM epoch is 0 (zero) by default (purge is disabled).

Examples
SELECT GET_AHM_EPOCH();
    get_ahm_epoch
----------------------
 Current AHM epoch: 0
(1 row)

GET_AHM_TIME
Returns a TIMESTAMP value representing the Ancient History Mark. Data deleted up to and including the AHM epoch can be purged from physical storage.

Syntax
GET_AHM_TIME()

Examples
SELECT GET_AHM_TIME();
                  get_ahm_time
------------------------------------------------
 Current AHM Time: 2009-02-17 16:13:19.97574-05
(1 row)

See Also
SET DATESTYLE (page 396) for information about valid TIMESTAMP (page 99) values.

GET_CURRENT_EPOCH
The GET_CURRENT_EPOCH function returns the number of the current epoch. The current epoch is the epoch into which data (COPY, INSERT, UPDATE, and DELETE operations) is currently being written. The current epoch advances automatically every three minutes.

Syntax
GET_CURRENT_EPOCH()

Examples
=> SELECT GET_CURRENT_EPOCH();
 get_current_epoch
----------------------
(1 row)

GET_DESIGN_SCRIPT
Generates a SQL query that you can use to create the physical schema (projections).

Syntax
GET_DESIGN_SCRIPT ( design_context_name , design_name )

Parameters
design_context_name    Specifies the name of the design context schema that contains the design for which to generate a SQL script.
design_name            Specifies the specific design configuration for which to generate a SQL script.

Notes
§  To create a file that contains the contents of the query, use \o to redirect the results to a file.

Example
The following example generates a SQL query for the VMartDesign in the vmart context:
SELECT GET_DESIGN_SCRIPT('vmart','VMartDesign');
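The \o redirection mentioned in the notes can be combined with the example query as follows (the output filename is a placeholder):

```sql
\o /tmp/vmart_design.sql
SELECT GET_DESIGN_SCRIPT('vmart','VMartDesign');
\o
```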

GET_LAST_GOOD_EPOCH
The GET_LAST_GOOD_EPOCH function returns the number of the last good epoch. The last good epoch is a term used in manual recovery and refers to the most recent epoch that can be recovered.

Syntax
GET_LAST_GOOD_EPOCH()

Examples
=> SELECT GET_LAST_GOOD_EPOCH();
  get_last_good_epoch
----------------------
 Last Good Epoch: 5598
(1 row)

GET_NUM_ACCEPTED_ROWS
For the current session, returns the number of rows loaded into the database for the last completed load.

Syntax
GET_NUM_ACCEPTED_ROWS();

Notes
•  Only loads from STDIN or a single file on the initiator are supported. This function cannot be called for multi-node loads.
•  Information is not available for a load that is currently running. Check VT_LOAD_STREAMS (page 476) for its status.
•  Data regarding loads does not persist, and is dropped when a new load is initiated.

GET_NUM_REJECTED_ROWS
For the current session, returns the number of rows that were rejected during the last completed load.

Syntax
GET_NUM_REJECTED_ROWS();

Notes
•  Only loads from STDIN or a single file on the initiator are supported. This function cannot be called for multi-node loads.
•  Information is not available for a load that is currently running. Check VT_LOAD_STREAMS (page 476) for its status.
•  Data regarding loads does not persist, and is dropped when a new load is initiated.
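Both row-count functions apply to the last completed load in the current session; a hypothetical single-file load on the initiator might be checked as follows (the table and path are placeholders):

```sql
COPY fact FROM '/data/fact.tbl' DELIMITER '|';  -- single-file load on the initiator
SELECT GET_NUM_ACCEPTED_ROWS();                 -- rows loaded by that COPY
SELECT GET_NUM_REJECTED_ROWS();                 -- rows it rejected
```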

GET_PROJECTION_STATUS
Returns information relevant to the status of a projection:
•  The current K-Safety status of the database
•  The number of nodes in the database
•  Whether or not the projection is segmented
•  The number and names of buddy projections
•  Whether or not the projection is safe
•  Whether or not the projection is up-to-date

Syntax
GET_PROJECTION_STATUS( [schema-name.]projection )

Parameters
[schema-name.]projection    Is the name of the projection for which to display status. When using more than one schema, specify the schema that contains the projection.

Notes
•  You can use GET_PROJECTION_STATUS to monitor the progress of a projection data refresh. See ALTER PROJECTION (page 314).
•  When using GET_PROJECTION_STATUS or GET_PROJECTIONS, you must provide the name and node (for example, ABC_NODE01) instead of just ABC.
•  To view a list of the nodes in a database, use the View Database Command in the Administration Tools.

Examples
SELECT GET_PROJECTION_STATUS('t1_sp2');
 get_projection_status
-----------------------------------------------------------------------------------------------
 Current system K is 0, # of Nodes: 1, t1_sp2 [Segmented: No] [# of Buddies: 0] [No buddy projections] [Safe: Yes] [UptoDate: No]
(1 row)

See Also
ALTER PROJECTION (page 314)
GET_PROJECTIONS (page 277)

GET_PROJECTIONS
Note: This function was formerly named GET_TABLE_PROJECTIONS(). Vertica still supports the former function name.

Returns information relevant to the status of a table:
•  The current K-Safety status of the database
•  The number of sites (nodes) in the database
•  The number of projections for which the specified table is the anchor table
•  For each projection:
   §  The projection's buddy projections
   §  Whether or not the projection is segmented
   §  Whether or not the projection is safe
   §  Whether or not the projection is up-to-date

Syntax
GET_PROJECTIONS( [schema-name.]table )

Parameters
[schema-name.]table    Is the name of the table for which to list projections. When using more than one schema, specify the schema that contains the table.

Notes
•  You can use GET_PROJECTIONS to monitor the progress of a projection data refresh. See ALTER PROJECTION (page 314).
•  When using GET_PROJECTIONS or GET_PROJECTION_STATUS, you must provide the name and node (for example, ABC_NODE01) instead of just ABC.
•  To view a list of the nodes in a database, use the View Database Command in the Administration Tools.

Examples
The following example gets information about the store_dimension table in the VMart schema:
SELECT GET_PROJECTIONS('store.store_dimension');
--------------------------------------------------------------------------------------
Current system K is 1.
# of Nodes: 4.
Table store.store_dimension has 4 projections.

Projection Name: [Segmented] [Seg Cols] [# of Buddies] [Buddy Projections] [Safe] [UptoDate]
----------------------------------------------------------
store.store_dimension_node0004 [Segmented: No] [Seg Cols: ] [K: 3] [store.store_dimension_node0003, store.store_dimension_node0002, store.store_dimension_node0001] [Safe: Yes] [UptoDate: Yes] [Stats: Yes]
store.store_dimension_node0003 [Segmented: No] [Seg Cols: ] [K: 3] [store.store_dimension_node0004, store.store_dimension_node0002, store.store_dimension_node0001] [Safe: Yes] [UptoDate: Yes] [Stats: Yes]
store.store_dimension_node0002 [Segmented: No] [Seg Cols: ] [K: 3] [store.store_dimension_node0004, store.store_dimension_node0003, store.store_dimension_node0001] [Safe: Yes] [UptoDate: Yes] [Stats: Yes]
store.store_dimension_node0001 [Segmented: No] [Seg Cols: ] [K: 3] [store.store_dimension_node0004, store.store_dimension_node0003, store.store_dimension_node0002] [Safe: Yes] [UptoDate: Yes] [Stats: Yes]
(1 row)

Notes • • Vertica does not create projections for any table that already has a safe super projection. Sessions can be interrupted during statement execution. If the stmtid is valid. specifies the statement to interrupt Notes • • • Only statements run by external sessions can be interrupted. Otherwise the system returns an error. INTERRUPT_STATEMENT Interrupts the specified statement (within an external session). Check SESSIONS for progress.SQL Functions See Also ALTER PROJECTION (page 314) GET_PROJECTION_STATUS (page 277) IMPLEMENT_TEMP_DESIGN Creates and implements a temporary physical schema design (projections). Messages Statement interrupt sent. -279- . Syntax INTERRUPT_STATEMENT( sessionid . stmtid ) Parameters sessionid stmtid specifies the session to interrupt. The command is successfully sent and returns a success message. Vertica creates temporary projections for all the tables in all named schema. Success. This identifier is unique within the cluster at any point in time. Syntax IMPLEMENT_TEMP_DESIGN ( 'table_name' ) IMPLEMENT_TEMP_DESIGN ( '' ) Parameters table_name Specifies the name of the table for which to create a projection. If an empty string is provided instead of a table name. This function returns the number of projections created. and writes a success or failure message to the log file. rolls back the current transaction. the statement is interruptible.

0. -[ RECORD 1 ] current_timestamp node_name user_name client_hostname login_timestamp session_id transaction_start transaction_id transaction_description statement_start statement_id last_statement_duration current_statement last_statement -[ RECORD 2 ] current_timestamp node_name user_name client_hostname login_timestamp session_id transaction_start transaction_id transaction_description | | | | | | | | | | | | | | | 2008-04-01 15:00:20 site01 release 127.) statement_start | 2008-04-01 15:00:17 statement_id | 17179869529 last_statement_duration | 0 current_statement | COPY ClickStream_Fact FROM '/scratch_b/qa/vertica/QA/ -280- .0. select * from sessions. Examples Two user session are open. The session is internal.0.1:57150 | 2008-04-01 14:58:56 | rhel4-1-30361:0xd868a:1015659125 | 2008-04-01 15:00:17 | 45035996273741101 | user release (COPY ClickStream_Fact FROM | '/scratch_b/qa/vertica/QA/VT_Scenario/data/clickstream | /1g/ClickStream_Fact. No interruptible statement running If the statement is DDL or otherwise non-interruptible. and RECORD 2 shows user session running COPY DIRECT: => select * from sessions.SQL Reference Manual Session <id> could not be successfully interrupted: session not found The session ID argument to the interrupt command does not match a running session. Internal (system) sessions cannot be interrupted. | 2008-04-01 15:00:20 | site01 | release | 127.1:57141 2008-04-01 14:41:26 rhel4-1-30361:0xd7e3e:994462853 2008-04-01 14:48:54 45035996273741092 user release (select * from sessions. RECORD 1 shows user session running SELECT FROM SESSION.0.tbl' DELIMITER '|' NULL '\\n' DIRECT.) 2008-04-01 15:00:20 0 1 select * from sessions. Session <id> could not be successfully interrupted: statement not found The statement ID does not (or no longer) matches the ID of a running statement (if any).

-281- .) statement_start | 2008-04-01 15:01:09 statement_id | 0 last_statement_duration | 99 current_statement | select * from sessions. | -[ RECORD 2 ] current_timestamp | 2008-04-01 15:01:09 node_name | site01 user_name | release client_hostname | 127.0. It appears in the last statement field: => SELECT * FROM SESSIONS. Check SESSIONS for progress.1:57150 login_timestamp | 2008-04-01 14:58:56 session_id | rhel4-1-30361:0xd868a:1015659125 transaction_start | 2008-04-01 15:00:17 transaction_id | 45035996273741101 transaction_description | user release (COPY ClickStream_Fact FROM | '/scratch_b/qa/vertica/QA/VT_Scenario/data/clickstream/ | 1g/ClickStream_Fact.tbl' DELIMITER '|' NULL '\\n' DIRECT.) statement_start | 2008-04-01 15:00:17 statement_id | 0 last_statement_duration | 44456 current_statement | last_statement | COPY ClickStream_Fact FROM '/scratch_b/qa/vertica/QA/ | VT_Scenario/data/clickstream/1g/ClickStream_Fact.0. last_statement | select interrupt_statement('rhel4-1-30361:0xd868a:1015659125'. last_statement | Interrupt the COPY DIRECT statement running in session rhel4-1-30361:0xd868a:1015659125: => SELECT INTERRUPT_STATEMENT('rhel4-1-30361:0xd868a:1015659125'.tbl' DELIMITER '|' | NULL '\\n' DIRECT. Verify that the interrupted statement is no longer active. Interrupt_statement.1:57141 login_timestamp | 2008-04-01 14:41:26 session_id | rhel4-1-30361:0xd7e3e:994462853 transaction_start | 2008-04-01 14:48:54 transaction_id | 45035996273741092 transaction_description | user release (select * from sessions.tbl' DELIMITER '|' | NULL '\\n' DIRECT.17179869529). -[ RECORD 1 ] current_timestamp | 2008-04-01 15:01:09 node_name | site01 user_name | release client_hostname | 127.0.17179869529). statement interrupt sent.0.SQL Functions | VT_Scenario/data/clickstream/1g/ClickStream_Fact.

Once you gather data statistics. bool_value ) Parameters design_context_name bool_value Specifies the name of the design context schema in which to load database statistics. Determines if the statistics are regenerated before loading them into the design context Use one of the following: true (to regenerate the data statistics) false Notes • • • To load data statistics. It is necessary to regenerate statistics the first time you create a design context or whenever you modify design tables. If this parameter is set to true. gathering statistics can take a significant amount of time. the function runs analyze_statistics on each design table. This means that they are used by every design created for the database. true). • • Example The following example regenerates and then loads data statistics into the vmart design context: SELECT LOAD_DATA_STATISTICS('vmart'. and you do not need to regenerate them again unless you modify the tables that you have included in one or more designs. Depending upon the size of your database. they persist for the database. all database nodes must be up. Use the regenerate_stats parameter (true) to update the statistics before loading them into the data statistics table. READ_DATA_STATISTICS (page 291) LOAD_DESIGN_QUERIES Updates the specified query input table with queries from the query input file specified. LOAD _DATA_STATISTICS loads existing data statistics for the design tables into the design context. Syntax LOAD_DATA_STATISTICS ( design_context_name . By default. file_name ) -282- . See Also ANALYZE_STATISTICS (page 247). the function will fail. Syntax LOAD_DESIGN_QUERIES ( query_table_name .SQL Reference Manual LOAD_DATA_STATISTICS Loads data into the design context specified. Otherwise. Loading new or modified data stats could change the behavior of existing queries because the optimizer uses these statistics.

the current epoch at the time MAKE_AHM_NOW() was issued. you will not be able to perform historical queries prior to the current epoch. You can use this function in conjunction with CREATE_DESIGN_QUERIES_TABLE (page 257) and SET_DESIGN_QUERIES_TABLE (page 303) to create a new query input table. load it with data from the query repository. § Create additional queries after adding additional columns to one or more tables. It's also used to add additional queries to designs. '/scratch/examples/VMart_Schema/QueryFile'). Syntax MAKE_AHM_NOW() Notes • The MAKE_AHM_NOW function performs the following operations: § Advances the epoch. Therefore. Caution: This function is intended only for users who administer databases on systems that belong to them. The query input file has a 10 MB limit. § Performs a moveout operation on all projections. This enables you to work on a design even if you do not have access to files on the database server. For example. All history will lost. you might want to: § Create targeted queries for a CEO and add them to a design. which should be. • • Example The following example loads queries from the QueryFile into a query input table named vmart_query_input: SELECT LOAD_DESIGN_QUERIES('vmart_query_input'. § Sets the AHM to LGE. and lets you drop any projections that existed before the issue occurred. at least. Specifies the absolute path of the query input file from which to obtain queries. • -283- . Notes • LOAD_DESIGN_QUERIES is used to load initial queries at design creation into a design. and establish it as the query input table for the specified design.SQL Functions Parameters query_table_name file_name Specifies the name of the query input table to update. MAKE_AHM_NOW Sets the Ancient History Mark (AHM) to the greatest allowable value.

Example SELECT MAKE_AHM_NOW(). However. make_ahm_now ------------------------------AHM set (New AHM Epoch: 5613) (1 row) See Also DROP PROJECTION (page 360) MARK_DESIGN_KSAFE (page 285) SET_AHM_EPOCH (page 298) SET_AHM_TIME (page 300) -284- . if the AHM is advanced in this way while nodes are down. the nodes must recover all data from scratch.SQL Reference Manual • This function succeeds even when nodes are down.

Notes • • • The database's internal recovery state persists across database restarts but it is not checked at startup time. Projections are considered to be buddies if they contain the same columns and have the same segmentation. MARK_DESIGN_KSAFE queries the catalog to determine whether a cluster's physical schema design meets the following requirements: • • • • Dimension tables are replicated on all nodes. Each fact table projection has at least one "buddy" projection for K-Safety=1 or two buddy projections for K-Safety=2. -285- . Two nodes are required for K-Safety=1 and three nodes are required for K-Safety=2. MARK_DESIGN_KSAFE does not change the physical schema in any way. Vertica returns one of the following messages. Fact table projection projection-name has insufficient "buddy" projections. When one node fails on a system marked K-safe=1. They can have different sort orders. in case of a failure. the remaining nodes are available for DML operations. n in the message is 1 or 2 and represents the k value. Fact table superprojections are segmented with each segment on a different node. Syntax SELECT MARK_DESIGN_KSAFE(k) Parameters k 2 enables high availability if the schema design meets requirements for K-Safety=2 1 enables high availability if the schema design meets requirements for K-Safety=1 0 disables high availability If you specify a k value of one (1) or two (2). Before enabling recovery. If a database has had MARK_DESIGN_KSAFE enabled.285 MARK_DESIGN_KSAFE Enables or disables high availability in your environment. you must temporarily disable MARK_DESIGN_KSAFE before loading data into a new table with its corresponding buddy. Each segment of each fact table projection exists on two or three nodes. Success: Marked design n-safe Failure: The schema does not meet requirements for K=n.

see SYSTEM (page 461) in the SQL System Tables (Monitoring API) (page 409). . node defaults to the initiator. Syntax MEASURE_LOCATION_PERFORMANCE ( path . The given K value is not correct. messages indicate which projections do not have a buddy: > SELECT MARK_DESIGN_KSAFE(1). mark_design_ksafe ---------------------Marked design 1-safe (1 row) If the physical schema design is not K-Safe. For information about designing segmented projections for K-Safety. (1 row) See Also • • • • High Availability and Recovery in the Concepts Guide. -286- .SQL Reference Manual Examples > SELECT MARK_DESIGN_KSAFE(1). For information about troubleshooting K-Safety. which is smaller that the given K of 1 . [Optional] Is the Vertica node where the location to be measured is available. . [ node ] ) Parameters path node Specifies where the storage location to measure is mounted. see Using Identically Segmented Projections in the Administrator's Guide. For information about monitoring K-Safety. If this parameter is omitted. which is smaller that the given K of 1 Projection pp2 has 0 buddies. the schema is 0-safe Projection pp1 has 0 buddies. see Failure Recovery in the Troubleshooting Guide MEASURE_LOCATION_PERFORMANCE Measures disk performance for the location specified.

. ALTER_LOCATION_USE. as follows: Read Time = Throughput (MB/second) + Latency (seeks/second) Therefore. Latency : 140 seeks/sec See Also ADD_LOCATION (page 239). . 'node2').. RETIRE_LOCATION (page 294). • • Example The following example measures the performance of a storage location on node2: SELECT MEASURE_LOCATION_PERFORMANCE('/secondVerticaStorageLocation/' . you need to measure storage location performance for each location in which data will be stored.. This method of measuring storage location performance applies only to configured clusters. This read time equates to the disk throughput in MB per second plus the time it takes to seek data based on the number of seeks per second. MERGE_PARTITIONS Merges ROSs that belong to partitions in a specified partition key range. and partitions are stored on different disks based on predicted or measured access patterns. Storage location performance equates to the amount of time it takes to read a fixed amount of data from the disk. ( partitionKeyFrom ) .. Syntax MERGE_PARTITIONS [ . Please check logs for progress measure_location_performance -------------------------------------------------Throughput : 122 MB/sec.. a disk is faster than another disk if its Read Time is smaller. and Measuring Location Performance. columns. . WARNING: measure_location_performance can take a long time.. ( table_name ) .SQL Functions Notes • If you intend to create a tiered disk architecture in which projections. ( partitionKeyTo ) ] Parameters table_name partitionKeyFrom partitionKeyTo Specifies the name of the table Specifies the start point of the partition Specifies the end point of the partition -287- . You do not need to measure storage location performance for temp data storage locations because temporary files are stored based on available space. If you want to measure a disk before configuring a cluster see ???.

Description
MERGE_PARTITIONS merges ROSs that have data belonging to partitions in a specified partition key range: [ partitionKeyFrom, partitionKeyTo ]. The edge values are included in the range, and partitionKeyFrom must be less than or equal to partitionKeyTo. If partitionKeyFrom is the same as partitionKeyTo, all ROSs of the partition key are merged into one ROS.

Notes
•  Partitioning functions take invariant functions only.
•  No restrictions are placed on a partition key's data type.
•  Inclusion of partitions in the range is based on the application of less than (<) / greater than (>) operators of the corresponding data type.

Examples
SELECT MERGE_PARTITIONS('T1', '200', '400');
SELECT MERGE_PARTITIONS('T1', '800', '800');
SELECT MERGE_PARTITIONS('T1', 'CA', 'MA');
SELECT MERGE_PARTITIONS('T1', 'false', 'true');
SELECT MERGE_PARTITIONS('T1', '06/06/2008', '06/07/2008');
SELECT MERGE_PARTITIONS('T1', '02:01:10', '04:20:40');
SELECT MERGE_PARTITIONS('T1', '06/06/2008 02:01:10', '06/07/2008 02:01:10');
SELECT MERGE_PARTITIONS('T1', '8 hours', '1 day 4 hours 20 seconds');

PARTITION_PROJECTION
Forces a split of ROS containers of the specified projection.

Syntax
PARTITION_PROJECTION ( projection_name )

Parameters
projection_name    Specifies the name of the projection, instead of the table.

Notes
•  PARTITION_PROJECTION is similar to PARTITION_TABLE (page 289), except that PARTITION_PROJECTION works only on the specified projection.
•  Partitioning functions take invariant functions only, in order that the same information be available across all nodes.
•  After a refresh completes, the refreshed projections go into a single ROS container. If the table was created with a PARTITION BY clause, then you should call PARTITION_TABLE() or PARTITION_PROJECTION() to reorganize the data into multiple ROS containers, since queries on projections with multiple ROS containers perform better than queries on projections with a single ROS container.
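The manual shows no standalone example for PARTITION_PROJECTION. A minimal sketch in the style of the other examples in this section — the projection name t1_super is hypothetical, not taken from this manual:

```sql
-- Split the ROS containers of a single projection by partition key.
-- Note that the argument is a projection name, not a table name.
SELECT PARTITION_PROJECTION('t1_super');
```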

See Also
DO_TM_TASK (page 263)
DROP_PARTITION (page 266)
DUMP_PARTITION_KEYS (page 270)
DUMP_PROJECTION_PARTITION_KEYS (page 270)
DUMP_TABLE_PARTITION_KEYS (page 271)
PARTITION_TABLE (page 289)
Partitioning Tables in the Administrator's Guide

PARTITION_TABLE
Forces the system to break up any ROSs that contain multiple distinct values of the partitioning expression. Only ROS containers with more than one distinct value participate in the split.

Syntax
PARTITION_TABLE ( table_name )

Parameters
table_name    Specifies the name of the table.

Notes
•  PARTITION_TABLE is similar to PARTITION_PROJECTION (page 288), except that PARTITION_TABLE works on the specified table.
•  Partitioning functions take invariant functions only, in order that the same information be available across all nodes.
•  After a refresh completes, the refreshed projections go into a single ROS container. If the table was created with a PARTITION BY clause, then you should call PARTITION_TABLE() or PARTITION_PROJECTION() to reorganize the data into multiple ROS containers, since queries on projections with multiple ROS containers perform better than queries on projections with a single ROS container.

See Also
DO_TM_TASK (page 263)
DROP_PARTITION (page 266)
DUMP_PARTITION_KEYS (page 270)
DUMP_PROJECTION_PARTITION_KEYS (page 270)
DUMP_TABLE_PARTITION_KEYS (page 271)
PARTITION_PROJECTION (page 288)

Partitioning Tables in the Administrator's Guide

PURGE
Purges all projections in the physical schema. A purge operation permanently removes deleted data from physical storage so that the disk space can be reused. You can purge historical data up to and including the epoch in which the Ancient History Mark is contained.

Syntax
PURGE()

Notes
This function was formerly named PURGE_ALL_PROJECTIONS. Vertica supports both function calls.

See Also
PURGE_PROJECTION (page 290)
PURGE_TABLE (page 290)
Purging Deleted Data in the Administrator's Guide

PURGE_PROJECTION
Purges the specified projection. A purge operation permanently removes deleted data from physical storage so that the disk space can be reused. You can purge historical data up to and including the epoch in which the Ancient History Mark is contained.

Syntax
PURGE_PROJECTION ( [schema-name.]projection )

Parameters
projection    Is the name of a specific projection. When using more than one schema, specify the schema that contains the projection.

See Also
Purging Deleted Data in the Administrator's Guide

PURGE_TABLE
Purges all projections of the specified table. A purge operation permanently removes deleted data from physical storage so that the disk space can be reused. You can purge historical data up to and including the epoch in which the Ancient History Mark is contained.

Syntax
PURGE_TABLE ( [schema-name.]table-name )

Parameters
table-name    Is the name of a specific table in the Logical Schema. When using more than one schema, specify the schema that contains the table.

Notes
This function was formerly named PURGE_TABLE_PROJECTIONS(). Vertica still supports the old name.

Example
The following example purges all projections for the store sales fact table located in the Vmart schema:
PURGE_TABLE('store.store_sales_fact');

See Also
Purging Deleted Data in the Administrator's Guide

READ_DATA_STATISTICS
Loads data statistics for each design table from an external XML file into the design context specified.

Syntax
READ_DATA_STATISTICS ( design_context_name , stats_file_name )

Parameters
design_context_name    Specifies the name of the design context schema in which to load database statistics.
stats_file_name        Specifies the file path for the XML file to pass in.

Notes
•  To create an XML file that contains data statistics, run ANALYZE_STATISTICS (page 247) to collect and aggregate data samples and storage information. Then use EXPORT_STATISTICS (page 273) to export this data to an XML file.
•  If a set of data statistics already exists for the design context, it is deleted before the data is read.
•  If you do not use ANALYZE_STATISTICS, Database Designer will produce a sub-optimal projection similar to the ones created for temporary designs.

See Also
LOAD_DATA_STATISTICS (page 282), ANALYZE_STATISTICS (page 247), and EXPORT_STATISTICS (page 273)
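The Notes above describe a three-step workflow: collect statistics, export them to XML, then load them into a design context. A hedged sketch of that sequence — the file path and context name are illustrative, and the exact argument forms of ANALYZE_STATISTICS and EXPORT_STATISTICS are defined on pages 247 and 273:

```sql
-- 1. Collect and aggregate data samples and storage information.
SELECT ANALYZE_STATISTICS('');
-- 2. Export the statistics to an XML file (path is illustrative).
SELECT EXPORT_STATISTICS('/tmp/vmart_stats.xml');
-- 3. Load the exported statistics into the design context.
SELECT READ_DATA_STATISTICS('vmart', '/tmp/vmart_stats.xml');
```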

REENABLE_DUPLICATE_KEY_ERROR
Reenables the default behavior of error reporting by reversing the effects of DISABLE_DUPLICATE_KEY_ERROR. Effects are session scoped.

Syntax
SELECT REENABLE_DUPLICATE_KEY_ERROR();

Examples
For examples and usage see DISABLE_DUPLICATE_KEY_ERROR (page 261).

See Also
ANALYZE_CONSTRAINTS (page 241)

REMOVE_DESIGN
Permanently removes all design repository records associated with the named physical schema design.

Syntax
REMOVE_DESIGN ( design_context_name , design_name )

Parameters
design_context_name    Specifies the name of the design context schema that contains the design to remove.
design_name            Specifies the specific design to delete.

Example
The following example removes the VMartDesign from the vmart context schema:
SELECT REMOVE_DESIGN('vmart','VMartDesign');

REMOVE_DESIGN_CONTEXT
Drops the design context schema with the specified name.

Syntax
REMOVE_DESIGN_CONTEXT ( design_context_name )

Parameters
design_context_name    Specifies the name of the design context schema to drop.

Notes
•  The design context schema and all its corresponding design configurations are dropped.
•  This is a permanent operation.

Restrictions
Only the database administrator can drop a design context schema.

Example
The following drops the design context schema for the VMart demo database:
SELECT REMOVE_DESIGN_CONTEXT('vmart');

See Also
CLEAR_DESIGN_TABLES (page 249) and REMOVE_DESIGN (page 292)

REMOVE_DEPLOYMENT_ENTRY
Removes a projection entry from the <designName>_deployment_projections table, thus removing it from the next deployment.

Syntax
REMOVE_DEPLOYMENT_ENTRY ( design_context_name , design_name , projection_name )

Parameters
design_context_name    Specifies the name of the design context schema that contains the design to modify.
design_name            Specifies the specific design to modify.
projection_name        Specifies the projection entry to be dropped.

Notes
•  Removing a projection entry prevents the corresponding drop from being implemented for the projection. Therefore, if the projection was supposed to be dropped, it will remain in the deployment.

Example
The following removes the customer_tmp_dev02 projection from the VMartDesign_deployment_projections table:
SELECT REMOVE_DEPLOYMENT_ENTRY('vmart','VMartDesign','customer_tmp_dev02');

RESET_DESIGN_QUERIES_TABLE
Removes the query input table from the design configuration.

Syntax
RESET_DESIGN_QUERIES_TABLE ( design_context_name , design_name )

Parameters
design_context_name    Specifies the name of the design context schema to update.

design_name            Specifies the name of the design configuration to update.

Notes
This is useful for changing an optimized design into a basic design.

Example
The following example removes the query input table from the VMartDesign configuration:
SELECT RESET_DESIGN_QUERIES_TABLE('vmart','VMartDesign');

See Also
CREATE_DESIGN_QUERIES_TABLE (page 257), SET_DESIGN_QUERIES_TABLE (page 303)

RESTORE_LOCATION
Restores the retired location specified.

Syntax
RESTORE_LOCATION ( path , [ node ] )

Parameters
path    Specifies where the retired storage location is mounted.
node    [Optional] Is the Vertica node where the retired location is available. If this parameter is omitted, node defaults to the initiator.

Notes
Once restored, Vertica will re-rank the storage locations and use the restored location to process queries as determined by its rank.

Example
The following example restores the retired storage location on node3:
SELECT RESTORE_LOCATION ('/thirdVerticaStorageLocation/' , 'node3');

See Also
ADD_LOCATION (page 239), RETIRE_LOCATION (page 294), and Modifying Storage Locations

RETIRE_LOCATION
Makes the specified storage location inactive.

Syntax
RETIRE_LOCATION ( 'path' , 'site' )

Parameters
path    Specifies where the storage location to retire is mounted.
site    Is the Vertica site where the location is available.

Notes
•  Before retiring a location, be sure that at least one location will remain for storing data and temp files. Data and temp files can be stored in either one storage location or separate storage locations.
•  Once retired, no new data can be stored on the location unless the location is restored through the RESTORE_LOCATION (page 294) function.
•  If the storage location stored data, the data is not moved. Instead, it is removed through one or more mergeouts. Therefore, the location cannot be dropped.
•  If the storage site was used to store only temp files, it can be dropped. See Dropping Storage Locations in the Administrators Guide and the DROP_LOCATION (page 265) function.

Example
SELECT RETIRE_LOCATION ('/secondVerticaStorageLocation/' , 'node2');

See Also
•  ADD_LOCATION (page 239) and RESTORE_LOCATION (page 294) in this SQL Reference Guide
•  Retiring Storage Locations in the Administrator's Guide

REVERT_DEPLOYMENT
Prepares a previously deployed design to be dropped from the deployment database and (optionally) drops it.

Syntax
REVERT_DEPLOYMENT ( design_context_name , design_name , [ bool_value ] )

Parameters
design_context_name    Specifies the name of the design context schema that contains the design to drop.
design_name            Specifies the name of the design to drop.
bool_value             Determines if the function calls RUN_DEPLOYMENT (true) or not (false) to drop the design. By default bool_value is true.

Notes
•  REVERT_DEPLOYMENT prepares a design to be dropped by:
   § Setting the deployment status of the design to pending. See the design_<design_name>_deployment view.
   § Determining all the projections that were deployed for the design and creating a 'drop' record for each of these projections in the design_<design_name>_deployment_projections table.
   § By default, calling RUN_DEPLOYMENT to drop the design.
•  Tip: Setting bool_value to false gives you the opportunity to test the new design without dropping the original design. See Deploying Test Designs.

Example
The following examples both prepare the VMartDesign (from the vmart context) to be dropped and then call RUN_DEPLOYMENT to drop it:
SELECT REVERT_DEPLOYMENT('vmart','VMartDesign');
SELECT REVERT_DEPLOYMENT('vmart','VMartDesign',true);

See Also
DEPLOY_DESIGN (page 259) and RUN_DEPLOYMENT (page 296)

RUN_DEPLOYMENT
Implements the design deployment created through either DEPLOY_DESIGN or DROP_DESIGN_DEPLOYMENT.

Syntax
RUN_DEPLOYMENT ( design_context_name , design_name , [ bool_value ] )

Parameters
design_context_name    Specifies the name of the design context schema that contains the design to deploy.
design_name            Specifies the name of the design to deploy.
bool_value             Specifies whether existing projections are dropped (true) or not (false) during deployment. By default, bool_value is true.

Notes
•  RUN_DEPLOYMENT implements the design by:
   § Implementing all new projections with an operation status of add. See the design_<design_name>_deployment_projections table for more information about the operation status.

   § By default, dropping all projections with an operation status of drop. Typically these are all pre-existing projections from the deployment database.
   § Refreshing new projections if necessary.
•  Tip: You can use the REMOVE_DEPLOYMENT_ENTRY function to prevent specific projections from being added or dropped. This enables you to fine tune the deployment.
•  Note: If you attempt to deploy a design created in Vertica version 3.0, Vertica displays an error indicating that you must regenerate the design. This occurs because there is no deployment system table for the design. Use either CREATE_DESIGN (page 255) or UPDATE_DESIGN (page 310).

Example
The following examples both deploy the VMartDesign from the vmart context. They also drop all deployed projections from pre-existing deployments:
SELECT RUN_DEPLOYMENT('vmart','VMartDesign');
SELECT RUN_DEPLOYMENT('vmart','VMartDesign',true);

See Also
DEPLOY_DESIGN (page 259), CANCEL_DEPLOYMENT (page 248) and REVERT_DEPLOYMENT (page 295)

SAVE_DESIGN_VERSION
Generates a backup physical schema design.

Syntax
SAVE_DESIGN_VERSION ( design_context_name , existing_design_name , saved_design_name )

Parameters
design_context_name    Specifies the name of the design context schema that contains the design to back up.
existing_design_name   Specifies the name of the design to back up.
saved_design_name      Specifies a unique name for the backup design.

Notes
•  The entire design configuration is saved, including all design context system tables related to the design as well as query information. You can also use it for design migration.
•  Once saved, you can export a design candidate for testing or deployment.

SAVE_QUERY_REPOSITORY
Triggers Vertica to save query data to the query repository immediately.

Syntax
SAVE_QUERY_REPOSITORY()

Notes
Vertica saves data based on the established query repository configuration parameters. For example, it will use the value of the QueryRepoRetentionTime parameter to determine the maximum number of days worth of queries to save. (See Configuring Query Repository.)

See Also
Collecting Query Information

SELECT CURRENT_SCHEMA
Shows the resolved name of $User.

Syntax
SELECT CURRENT_SCHEMA

Notes
If the search path for USER1 is: $USER, COMMON, PUBLIC, then SELECT CURRENT_SCHEMA() returns the following output if schema USER1 exists:
USER1
If schema USER1 does not exist, it returns the following output:
COMMON

Example
SELECT CURRENT_SCHEMA();
 current_schema
----------------
 public
(1 row)

SET_AHM_EPOCH
Sets the Ancient History Mark (AHM) to the specified epoch. This function allows deleted data up to and including the AHM epoch to be purged from physical storage. SET_AHM_EPOCH is normally used for testing purposes. Consider SET_AHM_TIME (page 300) instead, which is easier to use.

Syntax
SET_AHM_EPOCH ( epoch, [ true ] )

Parameters
epoch    Specifies one of the following:
         •  The number of the epoch in which to set the AHM

         •  Zero (0) (the default) disables purge (page 290)
true     [Optional] Allows the AHM to advance when nodes are down. Note: If the AHM is advanced after the last good epoch of the failed nodes, those nodes must recover all data from scratch.

Notes
If you use SET_AHM_EPOCH, the number of the specified epoch must be:
•  Greater than the current AHM epoch
•  Less than the current epoch
•  Less than or equal to the cluster last good epoch (the minimum of the last good epochs of the individual nodes in the cluster)
•  Less than or equal to the cluster refresh epoch (the minimum of the refresh epochs of the individual nodes in the cluster)
Use the SYSTEM (page 461) table to see current values of various epochs related to the AHM; for example:
SELECT * from SYSTEM;
-[ RECORD 1 ]------------+---------------------------
current_timestamp        | 2009-08-11 17:09:54.651413
current_epoch            | 1512
ahm_epoch                | 961
last_good_epoch          | 1510
refresh_epoch            | -1
designed_fault_tolerance | 1
node_count               | 4
node_down_count          | 0
current_fault_tolerance  | 1
catalog_revision_number  | 1590
wos_used_bytes           | 0
wos_row_count            | 0
ros_used_bytes           | 41490783
ros_row_count            | 1298104
total_used_bytes         | 41490783
total_row_count          | 1298104
All nodes must be up. You cannot use SET_AHM_EPOCH when any node in the cluster is down, except by using the optional true parameter. When a node is down and you issue SELECT MAKE_AHM_NOW(), the following error is printed to the vertica.log:
Some nodes were excluded from setAHM. If their LGE is before the AHM they will perform full recovery.

Examples
The following command sets the AHM to a specified epoch of 12:
SELECT SET_AHM_EPOCH(12);
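One hedged way to confirm the change afterward is to reread the SYSTEM table shown above; the single-column form here is a sketch assuming the column names match the record output:

```sql
-- Check that the Ancient History Mark now reflects the requested epoch.
SELECT ahm_epoch FROM system;
```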

The following command sets the AHM to a specified epoch of 2 and allows the AHM to advance despite a failed node:
SELECT SET_AHM_EPOCH(2, true);

See Also
MAKE_AHM_NOW (page 283)
SET_AHM_TIME (page 300)
SYSTEM (page 461)

SET_AHM_TIME
Sets the Ancient History Mark (AHM) to the epoch corresponding to the specified time on the initiator node. This function allows historical data up to and including the AHM epoch to be purged from physical storage.

Syntax
SET_AHM_TIME ( time , [ true ] )

Parameters
time    Is a TIMESTAMP (page 99) value that is automatically converted to the appropriate epoch number.
true    [Optional] Allows the AHM to advance when nodes are down. Note: If the AHM is advanced after the last good epoch of the failed nodes, those nodes must recover all data from scratch.

Notes
•  SET_AHM_TIME returns a TIMESTAMP WITH TIME ZONE value representing the end point of the AHM epoch.
•  You cannot change the AHM when any node in the cluster is down, except by using the optional true parameter. When a node is down and you issue SELECT MAKE_AHM_NOW(), the following error is printed to the vertica.log:
Some nodes were excluded from setAHM. If their LGE is before the AHM they will perform full recovery.

Examples
Epochs depend on a configured epoch advancement interval. If an epoch includes a three-minute range of time, the purge operation is accurate only to within minus three minutes of the specified timestamp:
SELECT SET_AHM_TIME('2008-02-27 18:13');
          set_ahm_time
------------------------------------

 AHM set to '2008-02-27 18:11:50-05'
(1 row)

Note: The -05 part of the output string is a time zone value, an offset in hours from UTC (Universal Coordinated Time, traditionally known as Greenwich Mean Time, or GMT), using only hours and minutes.

In the above example, the actual AHM epoch ends at 18:11:50, roughly one minute before the specified timestamp. This is because SET_AHM_TIME selects the epoch that ends at or before the specified timestamp. It does not select the epoch that ends after the specified timestamp because that would purge data deleted as much as three minutes after the AHM. For example, suppose that epoch 9000 runs from 08:50 to 11:50 and epoch 9001 runs from 11:50 to 15:50. SET_AHM_TIME('11:51') chooses epoch 9000 because it ends roughly one minute before the specified timestamp.

In the next example, if given an environment variable set as date=`date`, the following command fails if a node is down:
SELECT SET_AHM_TIME('$date');
In order to force the AHM to advance, issue the following command instead:
SELECT SET_AHM_TIME('$date', true);

See Also
MAKE_AHM_NOW (page 283)
SET_AHM_EPOCH (page 298) for a description of the range of valid epoch numbers
SET DATESTYLE (page 396) for information about specifying a TIMESTAMP (page 99) value

SET_DESIGN_KSAFETY
Sets the K-safety value for the physical schema design.

Syntax
SET_DESIGN_KSAFETY ( design_context_name , design_name , int_ksafety_value )

Parameters
design_context_name    Specifies the name of the design context schema to update.
design_name            Specifies the name of the design configuration to update.
int_ksafety_value      Sets the K-safety value to 0, 1, or 2.

Notes
In Vertica, the value of K can be 0, 1, or 2. The value of K can be 1 or 2 only when the physical schema design meets certain requirements. See K-Safety for an overview. Vertica uses the following defaults:
•  For a cluster that consists of one or two nodes, the default is zero (0).
•  For a cluster of three or more nodes, the default is one (1).

Example
The following example sets the K-safety for the design to two (2):
SELECT SET_DESIGN_KSAFETY('vmart','VMartDesign', 2);

SET_DESIGN_LOG_FILE
Specifies the location where the debug log (designer.log) for the physical schema design is created. By default, designer.log is located in the same directory as vertica.log.

Syntax
SET_DESIGN_LOG_FILE ( design_context_name , design_name , log_file_name )

Parameters
design_context_name    Specifies the name of the design context schema to modify.
design_name            Specifies the name of the design configuration to modify.
log_file_name          Specifies the location where the debug log for the design is created. If this string is empty, this function resets the location of the log file back to the default location.

See Also
SET_DESIGN_LOG_LEVEL (page 302)

SET_DESIGN_LOG_LEVEL
Specifies the level of detail Database Designer writes to the log file for the physical schema design.

Syntax
SET_DESIGN_LOG_LEVEL ( design_context_name , design_name , string_log_level )

Parameters
design_context_name    Specifies the name of the design context schema to update.
design_name            Specifies the name of the design configuration to update.
string_log_level       Sets the log level value to BASIC or DEBUG. The default is BASIC.

Example
The following example sets the log level for the design to DEBUG:
SELECT SET_DESIGN_LOG_LEVEL('vmart','VMartDesign','DEBUG');
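SET_DESIGN_LOG_FILE has no example of its own in this manual. A hedged sketch that redirects the debug log before raising the level — the file path is hypothetical:

```sql
-- Write the designer debug log to a custom location (path is illustrative),
-- then increase the level of detail written to it.
SELECT SET_DESIGN_LOG_FILE('vmart', 'VMartDesign', '/tmp/vmart_designer.log');
SELECT SET_DESIGN_LOG_LEVEL('vmart', 'VMartDesign', 'DEBUG');
-- Passing an empty string resets the log back to its default location.
SELECT SET_DESIGN_LOG_FILE('vmart', 'VMartDesign', '');
```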

See Also
SET_DESIGN_LOG_FILE (page 302)

SET_DESIGN_PARAMETER
Sets the specified physical schema design parameter to true or false.

Syntax
SET_DESIGN_PARAMETER ( design_context_name , design_name , parameter_name , bool_value )

Parameters
design_context_name    Specifies the name of the design context schema to update.
design_name            Specifies the name of the design configuration to update.
parameter_name         Sets one of the following parameters:
                       create_prejoins--Determines whether pre-join projections are created (true) or not (false).
                       sort_buddies_same--Determines whether the same sort order is used for buddy projections (true) or not (false). The default is false.
bool_value             One of the following: true, false

Notes
•  By default, Database Designer makes pre-joins as an optimization (true). However, you can turn them off.
•  By default, Database Designer uses different sort orders for buddies to offer slightly different optimizations, each of which may favor one type of query. If you are using queries with the same sort order, you would not want the buddy projections to use different sort orders. In this case, using buddy projections with the same sort order provides the least impact on performance if a node goes down, because queries will not have to be resorted.

Example
The following example configures a design that contains no pre-join projections:
SELECT SET_DESIGN_PARAMETER('vmart','VMartDesign','create_prejoins',false);

SET_DESIGN_QUERIES_TABLE
Establishes the specified table as the query input table for the physical schema design.

Syntax
SET_DESIGN_QUERIES_TABLE ( design_context_name , design_name , query_table_name )

Parameters
design_context_name    Specifies the name of the design context schema to modify.
design_name            Specifies the name of the design configuration to modify.
query_table_name       Specifies the name of the query input table to use for the design configuration.

Notes
This function is used in conjunction with CREATE_DESIGN_QUERIES_TABLE (page 257) and LOAD_DESIGN_QUERIES (page 282) to create a new query input table, load it with data from the query repository, and establish it as the query input table for the specified design. This enables you to work on a design even if you do not have access to files on the database server.

Example
The following example sets the query input table for the VMartDesign to the vmart_query_input table:
SELECT SET_DESIGN_QUERIES_TABLE('vmart','VMartDesign','vmart_query_input');

See Also
CREATE_DESIGN_QUERIES_TABLE (page 257), RESET_DESIGN_QUERIES_TABLE (page 293)

SET_DESIGN_QUERY_CLUSTER_LEVEL
Changes the number of query clusters used to the number specified.

Syntax
SET_DESIGN_QUERY_CLUSTER_LEVEL ( 'design_context_name' , 'design_name' , int_cluster_level )

Parameters
design_context_name    Specifies the name of the design context schema to update.
design_name            Specifies the name of the design configuration to update.
int_cluster_level      Specifies the number of query clusters for the design. The query cluster level can be any integer from one (1) to the number of queries to be included in the physical schema design. The default is one (1).

Notes
A query cluster level determines the number of sets used to group similar queries.

Queries are generally grouped based on the columns they access and the way in which they are used. For example, if a reporting tool and dashboard both access the same database, the reporting tool is likely to use a drill down to access a subset of data, and the dashboard is likely to use a large aggregation to look across a large range of data. In this case, there would be at least two (2) query clusters. The following work loads typically use different types of queries and are placed in different query clusters: drill downs, large aggregations, and large joins.

Example
The following example creates a design with three (3) query clusters:
SELECT SET_DESIGN_QUERY_CLUSTER_LEVEL('vmart','VMartDesign',3);

SET_DESIGN_SEGMENTATION_COLUMN
Specifies a column to use as the segmentation column for the table, rather than relying on Database Designer to choose it automatically.

Syntax
SET_DESIGN_SEGMENTATION_COLUMN ( 'design_context_name' , 'design_name' , 'table_name' , 'column_name' )

Parameters
design_context_name    Specifies the name of the design context schema to modify.
design_name            Specifies the name of the design configuration to modify.
table_name             Specifies the name of the design table that contains the column to use for segmentation.
column_name            Specifies the name of the column to use for segmentation.

Notes
•  The segmentation column is used to segment the projection across nodes. Use a segmentation column that has a high number of distinct values.
•  To specify more than one column, make additional calls to the SET_DESIGN_SEGMENTATION_COLUMN function. You can specify columns on the same table (which will automatically be combined) or other tables.

Example
The following example specifies that the product_price column in the Public.product_dimension table be used to segment the table:
SELECT SET_DESIGN_SEGMENTATION_COLUMN('vmart','VMartDesign','Public.product_dimension','product_price');

See Also
CLEAR_DESIGN_SEGMENTATION_TABLE (page 249)
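Per the note above, segmenting on more than one column takes one call per column. A sketch assuming a second column, product_key, exists on the same table (that column name is hypothetical here, not taken from this page):

```sql
-- Columns named on the same table in separate calls are combined automatically.
SELECT SET_DESIGN_SEGMENTATION_COLUMN('vmart','VMartDesign','Public.product_dimension','product_price');
SELECT SET_DESIGN_SEGMENTATION_COLUMN('vmart','VMartDesign','Public.product_dimension','product_key');
```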

'store. Syntax SET_DESIGN_TABLE_ROWS ( design_context_name . Specifies the table within the design context to modify.'store. num_rows ) Parameters design_context_name table_name num_rows Specifies the name of the design context schema to modify. SELECT SET_DESIGN_SEGMENTATION_TABLE( 'vmart'. Examples The following examples both mark the store.SQL Reference Manual SET_DESIGN_SEGMENTATION_TABLE Determines whether or not the specified table will be segmented. table_name .store_orders_fact table within the vmart design context for segmentation: SELECT SET_DESIGN_SEGMENTATION_TABLE( 'vmart'. One of the following: true (to segment the table) false The default is true if left unspecified.store_orders_fact'.true ). SET_DESIGN_TABLE_ROWS Sets the number of anticipated rows for the specified table. table_name .store_orders_fact' ). Notes • The table specified by the entry is marked as requesting segmentation. bool_value ) Parameters design_context_name table_name bool_value Specifies the name of the design context schema to modify. Specifies the number of anticipated rows for the table. Syntax SET_DESIGN_SEGMENTATION_TABLE ( design_context_name . -306- . Specifies the entry within the design context to modify.

Notes
•  If the number of table rows will be an order of magnitude larger or smaller than the sample provided for data statistics, use this function to supply the anticipated number of rows. An approximation is okay.
•  Database Designer uses this information to optimize the design for that number of rows.

Example
The following example sets the anticipated number of rows for the store.store_orders_fact entry within the vmart design context to five million (5,000,000):
SELECT SET_DESIGN_TABLE_ROWS('vmart','store.store_orders_fact',5000000);

SET_LOCATION_PERFORMANCE
Sets disk performance for the location specified.

Syntax
SET_LOCATION_PERFORMANCE ( path , node , throughput , avg_latency )

Parameters
path           Specifies where the storage location to set is mounted.
node           Is the Vertica node where the location to be set is available. If this parameter is omitted, node defaults to the initiator.
throughput     Specifies the throughput for the location. The throughput must be 1 or more.
avg_latency    Specifies the average latency for the location. The avg_latency must be 1 or more.

Notes
To obtain the throughput and avg_latency for the location, run the MEASURE_LOCATION_PERFORMANCE (page 286) function before attempting to set the location's performance.

Example
The following example sets the performance of a storage location on node2 to a throughput of 122 MB/second and a latency of 140 seeks/second:
SELECT SET_LOCATION_PERFORMANCE('/secondVerticaStorageLocation/','node2','122','140');

See Also
•  ADD_LOCATION (page 239) and MEASURE_LOCATION_PERFORMANCE (page 286) in this guide
•  Measuring Location Performance and Setting Location Performance in the Administrator's Guide

START_REFRESH
Transfers data to projections that are not able to participate in query execution due to missing or out-of-date data.

Syntax
START_REFRESH()

Notes
•  All nodes must be up in order to start a refresh.
•  A refresh can start when:
   § A new projection is created on tables that already have data
   § The newly-added projection has an existing, up-to-date buddy projection or a newly-added buddy projection
   § The MARK_DESIGN_KSAFE (page 285) function is called to validate the K-safety of the design
•  START_REFRESH has no effect if a refresh is already running.
•  Shutting down the database ends the refresh.
•  To view the progress of the refresh, see the PROJECTION_REFRESHES (page 446) and PROJECTIONS (page 416) SQL system tables.
•  If a projection is updated from scratch, the data stored in the projection represents the table columns as of the epoch in which the refresh commits. As a result, the query optimizer might not choose the new projection for AT EPOCH queries that request historical data at epochs older than the refresh epoch of the projection. Projections refreshed from buddies retain history and can be used to answer historical queries.
•  After a refresh completes, the refreshed projections go into a single ROS container. If the table was created with a PARTITION BY clause, then you should call PARTITION_TABLE() or PARTITION_PROJECTION() to reorganize the data into multiple ROS containers, since queries on projections with multiple ROS containers perform better than queries on projections with a single ROS container.

See Also
MARK_DESIGN_KSAFE (page 285), PROJECTION_REFRESHES (page 446) and PROJECTIONS (page 416)
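START_REFRESH takes no arguments. A minimal sketch, including the progress check suggested in the Notes — the lowercase system-table name is an assumption about how PROJECTION_REFRESHES is exposed to SELECT:

```sql
-- Kick off a refresh of out-of-date projections.
SELECT START_REFRESH();
-- Monitor progress through the system table named in the Notes.
SELECT * FROM projection_refreshes;
```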

SYNC_CURRENT_DESIGN
Creates a physical schema design with the projections from a deployed design.

Syntax
SYNC_CURRENT_DESIGN( design_context_name , design_name )

Parameters
design_context_name    Specifies the name of the design context in which to create the design.
design_name            Specifies the name of the design to create.

Notes
•  This is useful for capturing a deployed design. It allows the user to load the deployed design into designer system tables, where it can be modified through the standard Database Designer API. For example, a database administrator might want a designer to modify a deployed design. In this case, the database administrator would call SYNC_CURRENT_DESIGN and then, perhaps, export the design for the designer to work on elsewhere.
•  Vertica returns an error if the design context does not exist.

See Also
UPDATE_DESIGN (page 310)

TEMP_DESIGN_SCRIPT
Generates a SQL query that you can use to create a temporary projection. This function returns a script that contains all the SQL queries it generates.

Syntax
TEMP_DESIGN_SCRIPT ( 'table_name' )

Parameters
table_name    Specifies the name of the table to include in the design.

Notes
•  If you specify a table, Vertica generates a script for the table whether or not it has a safe super projection.
•  If you specify an empty string instead of a table name, Vertica includes all the tables in the schema in the design. In this case, Vertica generates scripts only for tables that do not already have a safe super projection.
•  This query is in the form:
CREATE PROJECTION fact_super (

     column-list
  )
  AS SELECT * FROM fact UNSEGMENTED ALL NODES;

Tip: To create a file that contains the query, use \o to redirect the results to a file.

UPDATE_DESIGN
Generates an updated physical schema design that contains the differences between two designs.

Syntax
UPDATE_DESIGN ( 'design_context_name' , 'base_design_name' , 'updated_design_name' )

Parameters
design_context_name   Specifies the name of the design context schema in which to update a design.
base_design_name      Specifies the name of the original base design.
updated_design_name   Specifies the name of the new design.

Notes
• To update a design, the general process is to create a new design and then use the UPDATE_DESIGN function to:
  § Compare the original design to the new design.
  § Generate an incremental design that contains the difference between the two designs. The incremental design:
    § Excludes base design projections for design tables that have been removed.
    § Adds projections for newly added design tables. (This includes applying queries from the base design that are anchored on these tables (if they are eligible anchor tables).)
    § If the query input table for the new design is not empty, applies those queries, creating projections optimized for these queries.
    § Excludes or adds projections as appropriate if there is a difference in k-safety between the base and new designs.
• Once the incremental design is generated, you deploy it on top of the original design to implement your modifications.

Example
The following example creates an incremental design based on a comparison of VMartDesign and NewVMartDesign.

-310-

SELECT UPDATE_DESIGN('vmart','VMartDesign','NewVMartDesign');

See Also
SYNC_CURRENT_DESIGN (page 309)

WAIT_DEPLOYMENT
Returns zero (0) when the deployment is complete.

Syntax
SELECT WAIT_DEPLOYMENT()

Notes
This function is useful for deploying a design from a script: it will not return until the deployment is complete.

See Also
RUN_DEPLOYMENT (page 296) and DEPLOY_DESIGN (page 259)

-311-
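The note above suggests calling WAIT_DEPLOYMENT from a deployment script. A sketch of that pattern, assuming the deployment was started earlier in the same script with RUN_DEPLOYMENT (page 296; its parameters are not shown in this section):

```sql
-- Block until the deployment started by RUN_DEPLOYMENT finishes;
-- returns 0 when the deployment is complete.
SELECT WAIT_DEPLOYMENT();
```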


SQL Statements
The primary structure of a SQL query is its statement. Multiple statements are separated by semicolons, for example:

CREATE TABLE fact (
   ...,
   date_col DATE NOT NULL,
   state VARCHAR2 NOT NULL,
   ...
);

-313-

See Also
• Use DO_TM_TASK (page 263) to run a Tuple Mover operation (moveout) on one or more projections defined on a specified table.
• See Purging Deleted Data in the Administrator's Guide for more information about purge operations.
• See Understanding the Automatic Tuple Mover in the Administrator's Guide for an explanation of how to use manual Tuple Mover control.

ALTER PROJECTION
Initiates a rename operation on the specified projection:

Syntax
ALTER PROJECTION projection { RENAME TO new-projection-name }

Parameters
projection   Specifies the name of a projection.
RENAME TO    Changes the name of the projection to the specified name.

ALTER SCHEMA
Renames one or more existing schemas.

Syntax
ALTER SCHEMA schemaname1 [, schemaname2, schemaname3 ] RENAME TO new-schema-name1 [, new-schema-name2, new-schema-name3 ]

Parameters
schemaname   Specifies the name of one or more schemas to rename.
RENAME TO    Specifies one or more new schema names.

When renaming schemas, be sure to follow these standards:
• The number of schemas to rename must match the number of new schema names supplied.
• The new schema names must not already exist.
• The lists of schemas to rename and the new schema names are parsed from left to right and matched accordingly using one-to-one correspondence.
• The RENAME TO parameter is applied atomically: either all the schemas are renamed or none of the schemas are renamed. If, for example, the number of schemas to rename

-314-

does not match the number of new names supplied, none of the schemas are renamed.

Notes
• Only the superuser or schema owner can use the ALTER SCHEMA command.
• Renaming schemas does not affect existing prejoin projections because prejoin projections refer to schemas by the schemas' unique numeric IDs (OIDs), and the OIDs for schemas are not changed by ALTER SCHEMA.
• Note: Renaming a schema that is referenced by a view will cause the view to fail unless another schema is created to replace it.

Examples
The following example renames schema S1 to S3 and schema S2 to S4:
ALTER SCHEMA S1, S2 RENAME TO S3, S4;

Tip
Renaming schemas is useful for swapping schemas without actually moving data. To facilitate the swap, enter a non-existent, temporary placeholder schema. The following example uses the temporary schema temps to facilitate swapping schema S1 with schema S2. In this example, S1 is renamed to temps. Then S2 is renamed to S1. Finally, temps is renamed to S2.
ALTER SCHEMA S1, S2, temps RENAME TO temps, S1, S2;

See Also
CREATE SCHEMA (page 344) and DROP SCHEMA (page 360)

-315-

ALTER TABLE
Modifies an existing table.

Syntax 1
ALTER TABLE [schema-name.]table-name
{ ADD COLUMN column-definition (on page 348)
| ADD table-constraint (on page 319)
| ALTER COLUMN column_name [ SET DEFAULT default_expr ] | [ DROP DEFAULT ]
| DROP CONSTRAINT constraint-name [ RESTRICT | CASCADE ]
| RENAME [ COLUMN ] column TO new_column
| SET SCHEMA new-schema-name [ CASCADE | RESTRICT ] }

Syntax 2
ALTER TABLE [schema-name.]table-name1 [, [schema-name.]table-name2, [schema-name.]table-name3 ]
RENAME [TO] new-table-name1 [, new-table-name2, new-table-name3 ]

Parameters
[schema-name.]table-name
Specifies the name of the table to be altered. When using more than one schema, specify the schema that contains the table. When using ALTER TABLE to rename one or more tables, you can specify a comma-delimited list of table names to rename. ALTER TABLE can be used in conjunction with SET SCHEMA to move only one table between schemas at a time.

ADD COLUMN column-definition
Adds a new column to a table and to all superprojections of the table. When a new column is added, the default value is inserted for existing rows. For example, if current_timestamp is the default expression, all rows will have the current timestamp. The column is populated according to the column-constraint (on page 349). A unique projection column name is generated in each superprojection. Volatile functions cannot be specified through ADD COLUMN; use ALTER COLUMN to specify volatile functions.
Note: Columns added to a table that is referenced by a view will not appear in the result set of the view even if the view uses the wild card (*) to represent all columns in the table. Recreate the view to incorporate the column.

ADD table-constraint
Adds a table-constraint (on page 319) to a table that does not have any associated projections.
Note: Adding a table constraint has no effect on views that reference the table.

-316-
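A minimal ADD COLUMN sketch, assuming the column-definition (page 348) accepts a DEFAULT clause, as the current_timestamp discussion above implies (table and column names are hypothetical):

```sql
-- Existing rows receive the default value 'N/A'.
ALTER TABLE Retail.Product_Dimension
    ADD COLUMN Warranty_Class VARCHAR(10) DEFAULT 'N/A';
```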

ALTER COLUMN column_name [ SET DEFAULT default_expr ] | [ DROP DEFAULT ]
Alters an existing column within the specified table to change or drop a default expression.
Tip: When adding a column, you cannot specify a volatile function in its default column expression. To work around this, add the column and fill it with NULLs. Then, alter the column to specify the volatile function. For example:
ALTER TABLE tbl ADD COLUMN newcol float;
ALTER TABLE tbl ALTER COLUMN newcol SET DEFAULT random();

DROP CONSTRAINT constraint-name [ RESTRICT | CASCADE ]
Drops the specified table-constraint (on page 319) from the table. Use the CASCADE keyword to drop a constraint upon which something else depends; for example, a FOREIGN KEY constraint depends on a UNIQUE or PRIMARY KEY constraint on the referenced columns.
Note: Dropping a table constraint has no effect on views that reference the table.

RENAME [TO] new-table-name1
In either case, the key word changes the name of the table or tables to the specified name or names. RENAME TO is used to rename one table, while RENAME is used to rename multiple tables. To rename two or more tables simultaneously, use a comma-delimited list. When renaming tables, be sure to follow these standards:
• Do not specify the schema-name as part of the table specification after the RENAME TO clause. The schema-name is specified only after the ALTER TABLE clause because this statement applies to only one schema. The following example renames tables T1 and T2 in the S1 schema to U1 and U2 respectively:
  ALTER TABLE S1.T1, S1.T2 RENAME TO U1, U2;
  The following example generates a syntax error:
  ALTER TABLE S1.T1, S1.T2 RENAME TO S1.U1, S1.U2;
• The number of tables to rename must match the number of new table names supplied.
• The new table names must not already exist.
• The RENAME TO parameter is applied atomically: either all the tables are renamed or none of the tables are renamed. If, for example, the number of tables to rename does not match the number of new names supplied, none of the tables are renamed.
• The lists of tables to rename and the new table names are parsed from left to right and matched accordingly using one-to-one correspondence.
Note: Renaming a table that is referenced by a view will cause the view to fail unless another table is created to replace it.

-317-

RENAME [ COLUMN ] column TO new_column
Renames the specified column within the table.
Note: If a column that is referenced by a view is renamed, the column will not appear in the result set of the view even if the view uses the wild card (*) to represent all columns in the table. Recreate the view to incorporate the column's new name.

SET SCHEMA new-schema-name [ CASCADE | RESTRICT ]
Moves the table to the specified schema. By default, SET SCHEMA is set to CASCADE. This means that all the projections that are anchored on this table are automatically moved to the new schema regardless of the schema in which they reside. To move only projections that are anchored on this table and that reside in the same schema, use the RESTRICT key word.
Temporary tables cannot be moved between schemas.
If the name of the table or any of the projections that you want to move already exists in the new schema, the statement rolls back and the tables and projections are not moved. In the new schema, rename the table or projections that conflict with the ones that you want to move, and then rerun the statement.
Note: Although this is likely to occur infrequently, Vertica supports moving system tables to system schemas if necessary. This might occur to support designs created through Database Designer.

Notes
• To use the ALTER TABLE statement, the user must either be a superuser or be the table owner and have CREATE privilege on the affected schema. If you use SET SCHEMA, you must also have CREATE privilege on the schema to which you want to move the table.
• With the exception of performing a table rename, one operation can be performed at a time in an ALTER TABLE command. The following clauses cannot be used with any other clauses; they are exclusive:
  § RENAME [TO]
  § RENAME COLUMN
  § SET SCHEMA
  § ADD COLUMN
  The ADD constraints and DROP constraints clauses can be used together.
• You cannot use ALTER TABLE ... ADD COLUMN on a temporary table.
• Adding a column to a table does not affect the K-Safety of the physical schema design.
• Vertica allows adding 1600 columns to a table. To add multiple columns, issue consecutive ALTER TABLE ADD COLUMN commands.

-318-
The following example drops a table constraint to the Product_Dimension table within the Retail schema. T1. ALTER TABLE Retail. Examples The following example adds a table constraint to the Product_Dimension table within the Retail schema. The following example drops the default expression specified for the Discontinued_flag column. . The following example uses the temporary table temps to facilitate swapping table T1 with table T2.Product_Dimension RENAME COLUMN Product_description TO Item_description.Product_Dimension ADD CONSTRAINT PK_Product_Dimension PRIMARY KEY (Product_Key). Finally. ] ) | FOREIGN KEY ( column [ . ALTER TABLE T1. ALTER TABLE Retail.T1 SET SCHEMA S2. T1 is renamed to temps.Product_Dimension table from Product_description to Item_description: ALTER TABLE Retail. In this example. It cannot be used to swap tables across schemas.. T2.. temps RENAME TO temps. The following example moves table T1 from schema S1 to schema S2. temporary placeholder table. .. ] ) -319- .SQL Statements Tip Renaming tables is useful for swapping tables within the same schema without actually moving data. Syntax [ CONSTRAINT constraint_name ] { PRIMARY KEY ( column [ . T2. temps is renamed to T2. use a non-existent. ALTER TABLE Retail. ] ) REFERENCES table | UNIQUE ( column [ . The following example renames a column in the Retail. To enable the swap. ..Product_Dimension ALTER COLUMN Discontinued_flag DROP DEFAULT..Product_Dimension DROP CONSTRAINT PK_PRODUCT_Dimension. SET SCHEMA defaults to CASCADE so all the projections that are anchored on table T1 are automatically moved to schema S2 regardless of the schema in which they reside ALTER TABLE S1. table-constraint Adds a join constraint or a constraint describing a functional dependency to the metadata of a table.. Then T2 is renamed to T1. See Adding Constraints in the Administrator's Guide.

] ) FOREIGN KEY ( column [ . the description "Seafood Product 1" exists only in the "Seafood" category. • • • Examples CORRELATION (Product_Description) DETERMINES (Category_Description) The Retail Sales Example Database described in the Getting Started Guide contains a table Product_Dimension in which products have descriptions and categories. the default is the primary key of table. For example. for example: CREATE TABLE fact(c1 INTEGER PRIMARY KEY). Adds a referential integrity constraint defining one or more NOT NULL numeric columns as a foreign key. If column is omitted. Define PRIMARY KEY and FOREIGN KEY constraints in all tables that participate in inner joins. . Use the ALTER TABLE (page 316) command to add a table constraint. . ] ) CORRELATION Notes • A foreign key constraint can be specified solely by a reference to the table that contains the primary key.. Given a tuple and the set of values in column1. Vertica recommends that you name all constraints. You can define several similar correlations between columns in the Product Dimension table. Specifies the table to which the FOREIGN KEY constraint applies. The columns in the referenced table do not need to be explicitly specified. The CREATE TABLE statement does not allow table constraints. -320- ...SQL Reference Manual | CORRELATION ( column1 ) DETERMINES ( column2 ) } Parameters CONSTRAINT constraint-name PRIMARY KEY ( column [ . See Adding Join Constraints. . CREATE TABLE dim (c1 INTEGER REFERENCES fact). UNIQUE ( column [ . ] ) REFERENCES table Optionally assigns a name to the constraint. Adding constraint to a table that is referenced in a view does not affect the view.. Ensures that the data contained in a column or a group of columns is unique with respect to all the rows in the table. you can determine the corresponding value of column2.. Describes a functional dependency. Adds a referential integrity constraint defining one or more NOT NULL numeric columns as the primary key..
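Following the syntax above, a named FOREIGN KEY constraint can be added through ALTER TABLE like this (the Inventory_Fact table is hypothetical; Product_Dimension follows the Retail examples above):

```sql
-- The referenced columns default to the primary key of the
-- referenced table, so they need not be listed.
ALTER TABLE Retail.Inventory_Fact
    ADD CONSTRAINT FK_Inventory_Product
    FOREIGN KEY (Product_Key) REFERENCES Retail.Product_Dimension;
```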

ALTER USER
Changes a database user account.

Syntax
ALTER USER name [ WITH [ ENCRYPTED | UNENCRYPTED ] PASSWORD password ]

Parameters
name        Specifies the name of the user to alter. Names that contain special characters must be double-quoted.
ENCRYPTED   Is the default. An md5 encryption scheme is used.
password    Is the password to assign to the user.

-321-
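A sketch of the syntax above (the user name and password are hypothetical, and the quoting of the password literal is an assumption; ENCRYPTED is the default):

```sql
-- Double quotes are required because the name contains a special character.
ALTER USER "report-user" WITH PASSWORD 'new_password';
```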

COMMIT
Ends the current transaction and makes all changes that occurred during the transaction permanent and visible to other users.

Syntax
COMMIT [ WORK | TRANSACTION ]

Parameters
WORK | TRANSACTION   Have no effect; they are optional keywords for readability.

-322-
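For example, the optional keywords may be included or omitted with identical effect:

```sql
-- All three forms are equivalent; WORK and TRANSACTION are
-- optional keywords for readability only.
COMMIT;
COMMIT WORK;
COMMIT TRANSACTION;
```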

COPY
Is designed for bulk loading data from a file on a cluster host into a Vertica database. (See LCOPY (page 374) to load from a data file on a client system.) COPY reads data optionally compressed (GZIP, BZIP) from multiple named pipes or files and inserts records into the WOS (memory) or directly into the ROS (disk).
Note: You must connect as the database superuser in order to COPY from a file.

Syntax
COPY [schema-name.]table [ ( [ Column as Expression ] / column
...[ FILLER datatype ]
...[ FORMAT 'format' ]
...[ DELIMITER [ AS ] 'char' ]
...[ NULL [ AS ] 'string' ]
...[ ENCLOSED BY 'char' ]
...[ ESCAPE AS 'char' ]
...[ ,... ] ) ]
FROM { STDIN
...[ BZIP | GZIP | UNCOMPRESSED ]
| 'pathToData' [ ON nodename ] [ BZIP | GZIP | UNCOMPRESSED ] [, ...] }
...[ WITH ]
...[ DELIMITER [ AS ] 'char' ]
...[ NULL [ AS ] 'string' ]
...[ ENCLOSED BY 'char' ]
...[ ESCAPE AS 'char' ]
...[ RECORD TERMINATOR 'string' ]
...[ SKIP n ]
...[ REJECTMAX n ]
...[ EXCEPTIONS 'pathname' [ ON nodename ] [, ...] ]
...[ REJECTED DATA 'pathname' [ ON nodename ] [, ...] ]
...[ ABORT ON ERROR ]
...[ DIRECT ]
...[ STREAM NAME ]
...[ NO COMMIT ]

Parameters
[schema-name.]table
Specifies the name of a schema table (not a projection). When using more than one schema, specify the schema that contains the table. Vertica loads the data into all projections that include columns from the schema table.

Column as Expression
Specifies the target column, for which you want to compute values, as an expression. This is used to transform data when it is loaded into the target database. Transforming data is useful for computing values to be inserted into a column in the target database from other columns in the source. (See Transforming Data in the Administrator's Guide.)

Transformation Requirements

-323-

• The copy statement must contain at least one parsed column, which can be a filler column. (See Ignoring Columns and Fields in the Load File in this guide and the COPY (page 323) statement in the SQL Reference Guide for more information about using fillers.)
• Copy expressions may contain only constants, NULLs, operators, and comments.
• When there are computed columns, all parsed columns in the expression must be listed in the COPY statement.
• The return data type of the expression must be coercible to that of the target column. Parameters (parsed columns) will also be coerced to match the expression.
• COPY expressions can use most Vertica-supported SQL functions, as follows: date time functions, formatting functions, string functions, null value functions, numeric functions, and system functions.
• COPY expressions cannot use the following SQL functions: meta (Vertica functions), analytic, and aggregate functions.
• FORMAT cannot be specified for a computed column.

Transformation Restrictions
• Computed columns cannot be used in copy expressions.
• Raw data cannot be specified in the source for computed columns.

Transformation Usage
• If nulls are specified in the raw data for parsed columns in the source, evaluation will follow the same rules as for expressions within SQL statements.
• For parsed columns, specify only raw data in the source.
• A copy expression can be as simple as a single column and may be as complex as a case expression with multiple columns.
• Multiple columns can be specified in a copy expression. Multiple copy expressions can refer to the same parsed column.
• Copy expressions can be specified for columns of all supported data types.
• Parsed and computed columns can be interspersed in the COPY statement.

-324-
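A sketch of a transformation that follows the rules above: the computed column's CASE expression references a parsed column that is listed in the COPY statement (table, column, and file names are hypothetical):

```sql
COPY t1 (c1,
         c2 AS CASE WHEN c1 > 0 THEN 'positive' ELSE 'other' END)
FROM '/tmp/data.dat' DELIMITER '|';
```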

column
Restricts the load to one or more specified columns in the table. If no columns are specified, all columns are loaded by default. Table columns that are not in the column list are given their default values. If no default value is defined for a column, COPY inserts NULL.
Note: The data file must contain the same number of columns as the COPY command's column list. For example, in a table T1 with nine columns (C1 through C9), COPY T1 (C1, C6, C9) would load the three columns of data in each record to columns C1, C6, and C9, respectively. There is no implicit casting during parsing, so mismatched data types will cause the COPY to roll back and the row to be rejected.

FILLER datatype
Instructs Vertica not to load a column and the fields it contains into the destination table. This is useful for:
• Omitting columns that you do not want to transfer into a table.
• Transforming data from a source column and then loading the transformed data to a destination table without loading the original, untransformed source column (parsed column). (See Transforming Data in the Administrator's Guide.)
Filler requirements:
• The datatype of the filler column must be specified.
• The name of the filler column must be unique across the source file and target table.
• The filler column must be a parsed column, not a computed column. For parsed columns, specify only raw data in the source.
Filler restrictions:
• The source columns in a COPY statement cannot consist of only filler columns. (There is no restriction on the number of filler columns that can be used in a copy statement, other than that at least one column must not be a filler column.)
• Target table columns cannot be specified as filler whether or not they appear in the column list.
Filler usage:
• Expressions can contain filler columns.
• A data file can consist of only filler columns. This means that all data in a data file can be loaded into filler columns and then transformed and loaded into table columns.
• All parser parameters can be specified for filler columns. All statement level parser parameters apply to filler columns.

-325-
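A sketch of the filler-and-transform pattern described above (table, column, and file names are hypothetical): the two name fields are parsed into filler columns that are not loaded, and only their concatenation is stored:

```sql
COPY names (first FILLER VARCHAR(20),
            last  FILLER VARCHAR(20),
            full_name AS first || ' ' || last)
FROM '/tmp/names.dat' DELIMITER '|';
```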

FORMAT 'format'
Is specified for date, time, and binary data types. Supported date/time formats are the same as those accepted by TO_DATE(text, text), for example:
TO_DATE('05 Dec 2000', 'DD Mon YYYY')
If you specify invalid format strings, the COPY operation returns an error. Refer to the following links for supported formats:
Template Patterns for Date/Time Formatting (page 171)
Template Pattern Modifiers for Date/Time Formatting (page 174)

pathToData
Specifies the absolute path of the file containing the data, which can be from multiple input sources. The file or files must be accessible to the host on which the COPY statement runs. Path can optionally contain wildcards to match more than one file. The supported patterns for wildcards are specified in the Linux Manual Page GLOB(7): Globbing pathnames http://man-wiki.net/index.php/7:glob. You can use variables to construct the pathname as described in Using Load Scripts.

nodename
Is optional. If omitted, operations default to the query's initiator node.
Note: Nodename cannot be specified with STDIN because STDIN is read on the initiator node only.

STDIN
Reads from the client's standard input instead of a file. STDIN takes one input source only and is read on the initiator node. To load multiple input sources, use pathToData.

BZIP | GZIP | UNCOMPRESSED
UNCOMPRESSED is the default. Input files can be of any format. If wildcards are used, then all qualifying input files should be of the same format.
Note: When using concatenated BZIP or GZIP files, be sure that each source file is terminated with a record terminator before you concatenate them.

WITH / AS
Are for readability and have no effect.

DELIMITER 'char'
Is the single ASCII character that separates columns within each record of a file. The default in Vertica is a vertical bar (|). If the delimiter character appears in a string of data values, use the ESCAPE AS (\) character to indicate that it is a literal. Also use (\) to specify special (non-printing, control) characters as a delimiter, such as the tab character ('\t'). See Loading Character Data.
Note: A comma (,) is the delimiter commonly used in CSV data files. Quote (") is allowed as the delimiter if you set ENCLOSED BY to single quote (').

-326-

ENCLOSED BY 'char'
Sets the quote character and allows delimiter characters to be embedded in string values. The default is empty, with ENCLOSED BY off. Turn ENCLOSED BY on by specifying any single ASCII character; double quotes is the most common:
ENCLOSED BY '"'
Turn ENCLOSED BY off using the following command:
ENCLOSED BY ''
You can use ENCLOSED BY to embed delimiter character strings in values. In the following list, double quotes are used for illustration purposes only:
• An enclosed string is detected if the string starts with double quotes ("); leading spaces are optional.
• An enclosed string ends with a second occurrence of " with an optional adjacent delimiter.
• It is assumed that the string is not enclosed if the string does not start with ".
• A " in the middle of a string has no effect. Any " in the middle of an enclosed string, on the other hand, must include an escape (backslash \) character. For example, to include a quote character in a value, use the ESCAPE AS character: "\"vertica\"" returns "vertica".
• An enclosed string does not match a NULL string.
For example, given the following input with ENCLOSED BY off:
"vertica, value"
the columns are distributed as follows:
Column 1 contains "vertica
Column 2 contains value"
Notice the double quotes before vertica and after value.
Using the following sample input with ENCLOSED BY '"':
"1", "vertica,value", ",value", "'"
the columns are distributed as follows:
Column 1 contains 1
Column 2 contains vertica,value
Column 3 contains ,value
Column 4 contains '
You could also write the above example using single quotes ('1', 'vertica,value', ',value', ''') or any ASCII character of your choosing (~1~, ~vertica,value~, ~,value~, ~'~).

ESCAPE AS 'char'
Sets the escape character. The default is backslash (\). You can use the escape character to escape the record terminator, the delimiter, the enclosed-by character, and the escape character itself. You can include non-printing characters and backslash characters in the string according to the following convention:

Sequence   Description       Abbreviation   ASCII Decimal
\0         Null character    NULL           0
\a         Bell              BEL            7
\b         Backspace         BS             8
\t         Horizontal tab    HT             9
\n         Linefeed          LF             10
\v         Vertical tab      VT             11
\f         Formfeed          FF             12
\r         Carriage return   CR             13
\\         Backslash                        92

NULL 'string'
The string that represents a null value. The default is an empty string (''). Any data item that matches this string is stored as a null value. The null string is case-insensitive and must be the only value between the delimiters. For example, if the null string is NULL and the delimiter is the vertical bar (|):
|NULL| indicates a null value.
| NULL | does not indicate a null value.
To input an empty or literal string, use quotes (ENCLOSED BY), for example:
NULL ''
NULL 'literal'
Note: When using COPY FROM, make sure that you use the same null string, regardless of the contents, that you used with COPY TO.
When you use the COPY command in a script, you must substitute a double-backslash for each null string that includes a backslash. For example, the scripts used to load the example databases contain:
COPY ... NULL '\\n' ...

RECORD TERMINATOR 'string'
Specifies the literal character string that indicates the end of a data file record. A record ends at the record terminator or at the end of the file, whichever occurs first.

-328-
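A sketch combining the parser options above for CSV-style input (table and file names are hypothetical):

```sql
-- Comma-delimited fields, quoted with double quotes, with the
-- string NULL standing for null values.
COPY customers FROM '/tmp/customers.csv'
DELIMITER ',' ENCLOSED BY '"' NULL 'NULL';
```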

SKIP
Skips the first 'n' records in each file in a load, which is useful if you want to omit table header information.

REJECTMAX
Sets an upper limit on the number of logical records to be rejected before a load fails. The limit is one less than the value specified for REJECTMAX: when the number of rejected records becomes equal to the value specified for REJECTMAX, the load fails. If not specified or if the value is 0, REJECTMAX allows an unlimited number of rejections.
Note: Vertica does not accumulate rejected records across files or nodes while the data is loading. If one file exceeds the maximum reject number, the entire load fails.

EXCEPTIONS 'pathname'
Specifies the filename or absolute pathname in which to write messages indicating the input line number and the reason for each rejected data record. The format for the EXCEPTIONS file is:
COPY: Input record <num> in <pathofinputfile> has been rejected (<reason>). Please see <pathtorejectfile>, record <recordnum> for the rejected record.
The default pathname is:
<Catalog dir>/CopyErrorLogs/<tablename>-<filename of source>-copy-from-exceptions
where <Catalog dir> represents the directory in which the database catalog files are stored, and <tablename>-<filename of source> are the names of the table and data file. If copying from STDIN, the <filename of source> is STDIN.
Notes:
• Filename is required because of multiple input files. Also, long table names combined with long data file names can exceed the operating system's maximum length (typically 255 characters). To work around this limitation, specify a path for the exceptions file that is different from the default path, for example, \tmp\<shorter-file-name>.
• Only one pathname per node is accepted. If more than one is provided, then the system returns an error.
• Exceptions files are not shipped to the initiator node.
• If exceptions files are not specified:
  § If there is one data source file (pathToData or STDIN), all information is stored as one file in the default directory.
  § If there are multiple data files, all information is stored as separate files, one for each data file in the default directory.
• If exception files are specified:
  § If there is one data file, path is treated as a file, with all information stored in this file. If path is not a file, then the system returns an error.
  § If there are multiple data files, path is treated as a directory, with all information stored in separate files, one for each data file in this directory. If path is not a directory, then the system returns an error.

-329-
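A sketch of REJECTMAX with a custom exceptions path (the table name and paths are hypothetical):

```sql
-- The load fails when the 100th record is rejected, i.e. at most
-- 99 rejections are tolerated; reasons go to the exceptions file.
COPY fact FROM '/data/fact.dat' DELIMITER '|'
REJECTMAX 100
EXCEPTIONS '/tmp/fact-exceptions';
```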

REJECTED DATA 'pathname'
Specifies the filename or absolute pathname in which to write rejected rows. This file can then be edited to resolve problems and reloaded. The default pathname is:
<Catalog dir>/CopyErrorLogs/<tablename>-<filename of source>-copy-from-data
where <Catalog dir> represents the directory in which the database catalog files are stored, and <tablename>-<filename of source> are the names of the table and data file. If copying from STDIN, the <filename of source> is STDIN.
Notes:
• Filename is required because of multiple input files. Also, long table names combined with long data file names can exceed the operating system's maximum length (typically 255 characters). To work around this limitation, specify a path for the rejected data file that is different from the default path, for example, \tmp\<shorter-file-name>.
• Only one pathname per node is accepted. If more than one is provided, then the system returns an error.
• Rejected data files are not shipped to the initiator node.
• If rejected data files are not specified:
  § If there is one data source file (pathToData or STDIN), all information is stored as one file in the default directory.
  § If there are multiple data files, all information is stored as separate files, one for each data file in the default directory.
• If rejected data files are specified:
  § If there is one data file, path is treated as a file, with all information stored in this file. If path is not a file, then the system returns an error.
  § If there are multiple data files, path is treated as a directory, with all information stored in separate files, one for each data file in this directory. If path is not a directory, then the system returns an error.

ABORT ON ERROR
Stops the COPY command if a row is rejected and rolls back the command. No data is loaded.

DIRECT
Specifies that the data should go directly to the ROS (Read Optimized Store). By default, data goes to the WOS (Write Optimized Store).

STREAM NAME
Is the optional identifier that names a stream, which could be useful for quickly identifying a particular load. STREAM NAME appears in the stream column of the LOAD_STREAMS (page 440) table. By default, Vertica names streams by table and file name. For example, if you have two files (f1, f2) in Table A, stream names would appear as A-f1, A-f2, etc.

-330-

Use the following statement to name a stream:

COPY <mytable> FROM <myfile> DELIMITER '|' DIRECT STREAM NAME 'My stream name'.

NO COMMIT

Use COPY with the NO COMMIT keywords to prevent the current transaction from committing automatically (the default behavior for all but temporary tables). This option is useful for executing multiple COPY commands in a single transaction. For example, all the rows in the following sequence commit in the same transaction:

COPY... NO COMMIT;
COPY... NO COMMIT;
COPY... NO COMMIT;
COMMIT;

NO COMMIT can be combined with any other existing COPY option, and all the usual transaction semantics apply. If there is a transaction in progress initiated by a statement other than COPY (for example, INSERT), NO COMMIT adds rows to the same transaction as the earlier statements. The previous statements are NOT committed. Vertica recommends that you COMMIT (page 322) or ROLLBACK (page 379) the current transaction before using COPY.

Tip: Use the NO COMMIT keywords to incorporate detection of constraint violations into the load process. Vertica checks for violations when queries are executed, not when data is loaded. To avoid constraint violations, load data without committing it and then perform a post-load check of your data using the ANALYZE_CONSTRAINTS (page 241) function. If the function finds constraint violations, you can easily roll back the load because you have not committed it.

Notes

• COPY FROM STDIN is allowed to any user granted the INSERT privilege, while COPY FROM <file> is an admin-only operation.
• The COPY command automatically commits itself and any current transaction unless NO COMMIT is specified and unless the tables are temp tables.
• The COPY FORMAT keyword significantly improves performance for loading DATE data types.
• NULL values are not allowed for columns with primary key or foreign key referential integrity constraints.
• String data in load files is considered to be all characters between the specified delimiters. Do not enclose character strings in quotes; quote characters are treated as ordinary data.
• You cannot use the same character in both the DELIMITER and NULL strings.
• Invalid input is defined as:
  § Missing columns (too few columns in an input line).
  § Extra columns (too many columns in an input line).
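The constraint-detection tip above can be sketched as the following session; the table and file names are hypothetical:

```sql
-- Load without committing, then check constraints before deciding
-- whether to keep the load (names are hypothetical).
COPY fact FROM '/data/fact.txt' DELIMITER '|' NO COMMIT;
SELECT ANALYZE_CONSTRAINTS('fact');
-- If violations are reported, the uncommitted load is easy to undo:
ROLLBACK;
-- Otherwise, issue COMMIT to make the load permanent.
```

Because nothing was committed, the rollback removes only the rows added by this load, leaving earlier committed data untouched.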

  § Empty columns for INTEGER or DATE/TIME data types.
  § Incorrect representation of data type. For example, non-numeric data in an INTEGER column is invalid.
• Empty values (two consecutive delimiters) are accepted as valid input data for CHAR and VARCHAR data types. Empty columns are stored as an empty string (''), which is not equivalent to a null string.
• When an empty line is encountered during load, it is neither inserted nor rejected, but the record number is incremented. Bear this in mind when evaluating lists of rejected records. If you return a list of rejected records and one empty row was encountered during load, the position of rejected records is bumped up one position.
• COPY does not use the default data values defined by the CREATE TABLE command.
• Cancelling a COPY statement rolls back all rows loaded by that statement.
• Named pipes are supported. Naming conventions have the same rules as filenames on the given file system. Permissions are open, write, and close.
• The default record terminator for COPY is now '\n'. If you are loading data from a Windows client, you need to specify RECORD TERMINATOR '\r\n'. If you are using JDBC, Vertica recommends that you use the following value for the RECORD TERMINATOR: System.getProperty("line.separator")
• The following can be specified on either a statement or per column basis: DELIMITER, ENCLOSED BY, ESCAPE AS, and NULL. Column level parameters override statement level parameters. If no column level parameter is specified, the statement level parameter is used. If neither a column level nor statement level parameter is specified, the default is used. The same rules apply whether the parameter is specified at the statement or column level.

Examples

The following examples specify format, delimiter, null, and enclosed by strings:

COPY public.customer_dimension (customer_since FORMAT 'YYYY')
   DELIMITER ','
   NULL AS 'null'
   ENCLOSED BY '"'

COPY store.store_dimension
   FROM :input_file
   DELIMITER '|'
   NULL ''
   RECORD TERMINATOR '\f'

COPY a
   FROM stdin
   DELIMITER ','
   NULL '\\\N'
   DIRECT

Loading Binary Data

In the following example, create a table that loads a different binary format for each column and insert the same value, the byte sequence {0x61,0x62,0x63,0x64,0x65}:

CREATE TABLE t(oct VARBINARY(5), hex VARBINARY(5), bitstring VARBINARY(5));

Create the projection:

CREATE PROJECTION t_p(oct, hex, bitstring) AS SELECT * FROM t;

Issue the COPY command. Note that the copy is from STDIN, not a file:

COPY t (oct FORMAT 'octal', hex FORMAT 'hex', bitstring FORMAT 'bitstring')
FROM STDIN DELIMITER ',';

Enter the data to be copied, which you end with a backslash and a period on a line by itself:

141142143144145,0x6162636465,0110000101100010011000110110010001100101
\.

And now issue the SELECT statement to see the results:

SELECT * FROM t;
  oct  |  hex  | bitstring
-------+-------+-----------
 abcde | abcde | abcde
(1 row)

Using Compressed Data and Named Pipes

The following command creates the named pipe, pipe1:

\! mkfifo pipe1
\set dir `pwd`/
\set file '\'':dir'pipe1\''

The following sequence copies an uncompressed file from the named pipe:

\! cat pf1.dat > pipe1 &
COPY fact FROM :file delimiter '|';
SELECT * FROM fact;
COMMIT;

The following statement copies a GZIP file from the named pipe and uncompresses it:

\! gzip pf1.dat
\! cat pf1.dat.gz > pipe1 &
COPY fact FROM :file ON site01 GZIP delimiter '|';
SELECT * FROM fact;
COMMIT;
\! gunzip pf1.dat.gz

The following COPY command copies a BZIP file from the named pipe and then uncompresses it:

\! bzip2 pf1.dat
\! cat pf1.dat.bz2 > pipe1 &
COPY fact FROM :file ON site01 BZIP delimiter '|';
SELECT * FROM fact;
COMMIT;
\! bunzip2 pf1.dat.bz2

User-specified Exceptions and Rejected Data

\set dir `pwd`/data/

\set remote_dir /scratch_b/qa/tmp_ms/

Reject/Exception files NOT specified. The inputs are multiple files, and exceptions and rejection files go to the default directory on each node:

\set file1 '\'':dir'C1_rej.dat\''
\set file2 '\'':dir'C2_rej.dat\''
\set file3 '\'':remote_dir'C3_rej.dat\''
\set file4 '\'':remote_dir'C4_rej.dat\''
COPY fact FROM :file1 on site01,
               :file2 on site01,
               :file3 ON site02,
               :file4 ON site02
delimiter '|';

Reject/Exception files SPECIFIED. Input is a single file on the initiator, and the exceptions and rejected data are filenames instead of directories:

\set except_s1 '\'':dir'exceptions\''
\set reject_s1 '\'':dir'rejections\''
COPY fact FROM :file1 on site01 delimiter '|'
REJECTED DATA :reject_s1 on site01
EXCEPTIONS :except_s1 on site01;

Reject/Exception files SPECIFIED. A single file is on a remote node:

\set except_s2 '\'':remote_dir'exceptions\''
\set reject_s2 '\'':remote_dir'rejections\''
COPY fact FROM :file1 on site02 delimiter '|'
REJECTED DATA :reject_s2 on site02
EXCEPTIONS :except_s2 on site02;

Reject/Exception files SPECIFIED. Multiple data files are on multiple nodes, with rejected data and exceptions referring to the directory on which the files reside:

\set except_s1 '\'':dir'\''
\set reject_s1 '\'':dir'\''
\set except_s2 '\'':remote_dir'\''
\set reject_s2 '\'':remote_dir'\''
COPY fact FROM :file1 on site01,
               :file2 on site01,
               :file3 ON site02,
               :file4 ON site02
delimiter '|'
REJECTED DATA :reject_s1 on site01, :reject_s2 on site02
EXCEPTIONS :except_s1 on site01, :except_s2 on site02;

Loading NULL values

You can specify NULL values by entering fields in a data file without content. For example, given the default delimiter (|) and default NULL (empty string), the following inputs

 |   | 1
 | 2 | 3
4 |   | 5
6 |   |

are inserted into the table as follows:

(null, null, 1)
(null, 2, 3)
(4, null, 5)
(6, null, null)

If NULL is set as a literal ('null'), the following inputs

null | null | 1
null | 2    | 3
4    | null | 5
6    | null | null

are inserted into the table as follows:

(null, null, 1)
(null, 2, 3)
(4, null, 5)
(6, null, null)

Transforming Data

The following example derives and loads values for the year, month, and day columns in the target database based on the timestamp column in the source database. It also loads the parsed column, timestamp, from the source database to the target database.

CREATE TABLE t (
    year VARCHAR(10),
    month VARCHAR(10),
    day VARCHAR(10),
    k timestamp
);
CREATE PROJECTION tp (year, month, day, k) AS SELECT * FROM t;
COPY t (year AS TO_CHAR(k, 'YYYY'),
        month AS TO_CHAR(k, 'Month'),
        day AS TO_CHAR(k, 'DD'),
        k FORMAT 'YYYY-MM-DD') FROM STDIN NO COMMIT;
2009-06-17
1979-06-30
2007-11-26
\.
SELECT * FROM t;
 year |   month   | day |          k
------+-----------+-----+---------------------

 2009 | June      | 17  | 2009-06-17 00:00:00
 1979 | June      | 30  | 1979-06-30 00:00:00
 2007 | November  | 26  | 2007-11-26 00:00:00
(3 rows)

Ignoring Columns and Fields in the Load File

The following example derives and loads the value for the timestamp column in the target database from the year, month, and day columns in the source input. The year, month, and day columns are not loaded because the FILLER keyword skips them.

create table t (k timestamp);
create projection tp (k) as select * from t;
copy t(year FILLER varchar(10),
       month FILLER varchar(10),
       day FILLER varchar(10),
       k as to_date(year || month || day, 'YYYYMMDD')) from STDIN no commit;
2009|06|17
1979|06|30
2007|11|26
\.
select * from t;
          k
---------------------
 2009-06-17 00:00:00
 1979-06-30 00:00:00
 2007-11-26 00:00:00
(3 rows)

See Also

SQL Data Types (page 89)
ANALYZE_CONSTRAINTS (page 241)
Loading Binary Data and Loading Character Data in the Administrator's Guide

CREATE PROJECTION

Creates metadata for a projection in the Vertica catalog.

Syntax

CREATE PROJECTION [schema-name.]projection-name (
    [ projection-column ] [ ENCODING encoding-type (on page 340) ]
                          [ ACCESSRANK integer ] [ , ... ]
)
AS SELECT table-column [ , ... ]
   FROM table-reference [ , ... ]
   [ WHERE join-predicate (on page 83) [ AND join-predicate (on page 83) ] ... ]
   [ ORDER BY table-column [ , ... ] ]
   [ hash-segmentation-clause (on page 341)
   | range-segmentation-clause (on page 343)
   | UNSEGMENTED { NODE node | ALL NODES } ]

Parameters

[schema-name.]projection-name
Specifies the name of the projection to be created. When using more than one schema, specify the schema that contains the projection. Note: If the projection schema is not specified, the projection is created in the same schema as the anchor table.

projection-column
Specifies the name of a column in the projection. If projection columns are not explicitly named, they are inferred from the column names for the table specified in the SELECT statement. The data type is inferred from the corresponding column in the schema table (based on ordinal position). Different projection-column names can be used to distinguish multiple columns of the same name from different tables so that no aliases are needed. The following example automatically uses store and transaction as the projection column names for sales_p:

CREATE TABLE sales(store integer, transaction integer);
CREATE PROJECTION sales_p AS SELECT * FROM sales;

Note that you cannot specify specific encodings on projection columns using this method.

encoding-type
Specifies the type of encoding (see "encoding-type" on page 340) to use on the column. The Database Designer automatically chooses an appropriate encoding for each projection column. Caution: Using the NONE keyword for strings could negatively affect the behavior of string columns.

ACCESSRANK integer
Overrides the default access rank for a column. This is useful if you

want to increase or decrease the speed at which a column is accessed. See Creating and Configuring Storage Locations and Prioritizing Column Access Speed.

AS SELECT table-column
Specifies a list of schema table columns corresponding (in ordinal position) to the projection columns.

table-reference
Specifies a list of schema tables containing the columns to include in the projection, in the form: table-name [ AS ] alias [ ( column-alias [ , ... ] ) ] [ , ... ]

WHERE join-predicate
Specifies foreign-key = primary-key equijoins between the fact table and dimension tables. No other predicates are allowed. Foreign key columns must be NOT NULL. In order to do distributed query execution, Vertica requires an exact, unsegmented copy of each dimension table superprojection on each node. Dimension table projections must be UNSEGMENTED.

ORDER BY table-column
Specifies which columns to sort. If you do not specify the sort order, Vertica uses the order in which columns are specified in the column list as the sort order for the projection. All projection columns are sorted in ascending order in physical storage; CREATE PROJECTION does not allow you to specify ascending or descending.

hash-segmentation-clause
Allows you to segment a projection based on a built-in hash function that provides even distribution of data across nodes, resulting in optimal query execution. See hash-segmentation-clause (on page 341).

range-segmentation-clause
Allows you to segment a projection based on a known range of values stored in a specific column chosen to provide even distribution of data across a set of nodes. See range-segmentation-clause (on page 343).

UNSEGMENTED NODE node
Creates an unsegmented projection on the specified node only.

UNSEGMENTED ALL NODES
Creates a separate unsegmented projection on each node at the time the CREATE PROJECTION statement is executed (automatic replication). See the projection naming note below.

Unsegmented Projection Naming

CREATE PROJECTION ... UNSEGMENTED ALL NODES takes a snapshot of the nodes defined at execution time to generate a node list in a predictable order. Replicated projections have the name:

projection-name_node-name

For example, if the nodes are named NODE01, NODE02, and NODE03, then the following command creates projections named ABC_NODE01, ABC_NODE02, and ABC_NODE03:

CREATE PROJECTION ABC ... UNSEGMENTED ALL NODES;

This naming convention could impact functions that provide information about projections, for example, GET_PROJECTIONS (page 277) or GET_PROJECTION_STATUS (page 277), where you must provide the name ABC_NODE01 instead of just ABC.

Notes

• Vertica recommends that you use multiple projection syntax for K-safe clusters.
• If no segmentation is specified, the default is UNSEGMENTED on the node where the CREATE PROJECTION was executed.
• CREATE PROJECTION does not load data into physical storage. After the CREATE PROJECTION is executed, the projection is updated as part of INSERT, UPDATE, DELETE or COPY statements. If the tables over which the projection is defined already contain data, you must issue START_REFRESH (page 308) to bring the projection up-to-date. This process could take a long time, depending on how much data is in the tables.
• A projection is not refreshed until after a buddy projection is created. For example, if you execute 'select start_refresh()' the following message displays: Starting refresh background process. However, the refresh does not begin until after a buddy projection is created.
• You can monitor the refresh operation by examining the vertica.log file, or view the final status of the projection refresh by using SELECT get_projections('table-name'). For example:

SELECT get_projections('customer_dimension');
                        get_projections
----------------------------------------------------------------
Current system K is 1.
# of Nodes: 4.
Table public.customer_dimension has 4 projections.
Projection Name: [Segmented] [Seg Cols] [# of Buddies] [Buddy Projections] [Safe] [UptoDate]
----------------------------------------------------------
public.customer_dimension_node0004 [Segmented: No] [Seg Cols: ] [K: 3] [public.customer_dimension_node0003, public.customer_dimension_node0002, public.customer_dimension_node0001] [Safe: Yes] [UptoDate: Yes] [Stats: Yes]
public.customer_dimension_node0003 [Segmented: No] [Seg Cols: ] [K: 3] [public.customer_dimension_node0004, public.customer_dimension_node0002, public.customer_dimension_node0001] [Safe: Yes] [UptoDate: Yes] [Stats: Yes]
public.customer_dimension_node0002 [Segmented: No] [Seg Cols: ] [K: 3] [public.customer_dimension_node0004, public.customer_dimension_node0003, public.customer_dimension_node0001] [Safe: Yes] [UptoDate: Yes] [Stats: Yes]
public.customer_dimension_node0001 [Segmented: No] [Seg Cols: ] [K: 3] [public.customer_dimension_node0004, public.customer_dimension_node0003, public.customer_dimension_node0002] [Safe: Yes] [UptoDate: Yes] [Stats: Yes]
(1 row)

• Note: After a refresh completes, the refreshed projections go into a single ROS container. If the table was created with a PARTITION BY clause, then you should call PARTITION_TABLE() or PARTITION_PROJECTION() to reorganize the data into multiple ROS containers, since queries on projections with multiple ROS containers perform better than queries on projections with a single ROS container.
• To view a list of the nodes in a database, use the View Database command in the Administration Tools.
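Putting the refresh-related notes together, a minimal session over already-populated tables might look like this sketch (the table name is taken from the example above):

```sql
-- Bring newly created projections up-to-date, then check their status.
SELECT START_REFRESH();
SELECT GET_PROJECTIONS('customer_dimension');
```

The second query is how you confirm that each projection reports [UptoDate: Yes] before relying on it for queries.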

encoding-type

Vertica supports the following encoding and compression types:

ENCODING AUTO (default)

For CHAR/VARCHAR, BOOLEAN, BINARY/VARBINARY, and FLOAT columns, Lempel-Ziv-Oberhumer-based (LZO) compression is used. For INTEGER, DATE/TIME/TIMESTAMP, and INTERVAL types, the compression scheme is based on the delta between consecutive column values. Encoding Auto is ideal for sorted, many-valued columns such as primary keys. It is also suitable for general purpose applications for which no other encoding or compression scheme is applicable. Therefore, it serves as the default if no encoding/compression is specified. The CPU requirements for this type are small. In the worst case, data may expand by eight percent (8%) for LZO and twenty percent (20%) for integer data.

ENCODING RLE

Run Length Encoding (RLE) replaces sequences (runs) of identical values with a single pair that contains the value and number of occurrences. Therefore, it is best used for low cardinality columns that are present in the ORDER BY clause of a projection. The Vertica execution engine processes RLE encoding run-by-run and the Vertica optimizer gives it preference; thus, you should use it only when the run length is large, such as when low-cardinality columns are sorted. The storage for RLE and AUTO encoding of CHAR/VARCHAR and BINARY/VARBINARY is always the same. The CPU requirements for this type are relatively small.

ENCODING DELTAVAL

For INTEGER and DATE/TIME/TIMESTAMP/INTERVAL columns, data is recorded as a difference from the smallest value in the data block. This encoding has no effect on other data types. Encoding Deltaval is best used for many-valued, unsorted integer or integer-based columns. The CPU requirements for this type are small, and the data will never expand.

ENCODING BLOCK_DICT

For each block of storage, Vertica compiles distinct column values into a dictionary and then stores the dictionary and a list of indexes to represent the data block. BLOCK_DICT is ideal for few-valued, unsorted columns in which saving space is more important than encoding speed. Certain kinds of data, such as stock prices, are typically few-valued within a localized area once the data is sorted, such as by stock symbol and timestamp, and are good candidates for BLOCK_DICT. Long CHAR/VARCHAR columns are not good candidates for BLOCK_DICT encoding, and CHAR and VARCHAR columns that contain 0x00 or 0xFF characters should not be encoded with BLOCK_DICT. BINARY/VARBINARY columns do not support BLOCK_DICT encoding.
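As a sketch of attaching the encodings above explicitly (the table and column names are hypothetical), RLE suits a low-cardinality column leading the sort order, while DELTAVAL suits a many-valued, unsorted integer column:

```sql
-- state: low cardinality and first in the ORDER BY, a good RLE candidate.
-- id: many-valued integer key, a reasonable DELTAVAL candidate.
CREATE PROJECTION trades_p (
    state ENCODING RLE,
    id    ENCODING DELTAVAL
) AS SELECT state, id FROM trades
ORDER BY state, id;
```

If no ENCODING clause is given, ENCODING AUTO applies, and the Database Designer normally chooses encodings for you.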

ENCODING BLOCKDICT_COMP

This encoding type is similar to BLOCK_DICT except that dictionary indexes are entropy coded. This encoding type requires significantly more CPU time to encode and decode and has a poorer worst-case performance. However, use of this type can lead to space savings if the distribution of values is extremely skewed. The maximum data expansion is eight percent (8%).

ENCODING COMMONDELTA_COMP

This compression scheme builds a dictionary of all the deltas in the block and then stores indexes into the delta dictionary using entropy coding. This scheme is ideal for sorted FLOAT and INTEGER-based (DATE/TIME/TIMESTAMP/INTERVAL) data columns with predictable sequences and only the occasional sequence breaks, such as timestamps recorded at periodic intervals or primary keys. If the delta distribution is excellent, columns can be stored in less than one bit per row. However, this scheme is very CPU intensive. If you use this scheme on data with arbitrary deltas, it can lead to significant data expansion.

For example, the following sequence compresses well: 300, 600, 900, 1200, 1500, 1800, 2400. The following sequence does not compress well: 1, 3, 6, 10, 15, 21, 28, 36, 45, 55.

ENCODING DELTARANGE_COMP

This compression scheme is primarily used for floating point data, and it stores each value as a delta from the previous one. This scheme is ideal for many-valued FLOAT columns that are either sorted or confined to a range. Do not use this scheme for unsorted columns that contain NULL values, as the storage cost for representing a NULL value is high. This scheme has a high cost for both compression and decompression. To determine if DELTARANGE_COMP is suitable for a particular set of data, compare it to other schemes. Be sure to use the same sort order as the projection, and select sample data that will be stored consecutively in the database.

ENCODING NONE

Do not specify this value. It is obsolete and exists only for backwards compatibility. The result of ENCODING NONE is the same as ENCODING AUTO except when applied to CHAR and VARCHAR columns. Using ENCODING NONE on these columns increases space usage, increases processing time, and leads to problems if 0x00 or 0xFF characters are present in the data.

hash-segmentation-clause

Hash segmentation allows you to segment a projection based on a built-in hash function that provides even distribution of data across some or all of the nodes in a cluster, resulting in optimal query execution.

Note: Hash segmentation is the preferred method of segmentation in Vertica 2.0 and later. The Database Designer uses hash segmentation by default.

Syntax

SEGMENTED BY expression [ ALL NODES [ OFFSET offset ] | NODES node [ , ... ] ]

Parameters

SEGMENTED BY expression
Can be a general SQL expression, but there is no reason to use anything other than the built-in HASH (page 181) or MODULARHASH (page 183) functions with table columns as arguments. Choose columns that have a large number of unique data values and acceptable skew in their data distribution. Primary key columns that meet the criteria could be an excellent choice for hash segmentation. If you want to use a different SEGMENTED BY expression, the following restrictions apply:
§ All leaf expressions must be either constants (on page 55) or column-references (see "Column References" on page 74) to a column in the SELECT list of the CREATE PROJECTION command
§ Aggregate functions are not allowed
§ The expression must return the same value over the life of the database
§ The expression must return non-negative INTEGER values in the range 0 <= x < 2^63 (two to the sixty-third power), and the values should be uniformly distributed over that range
§ If expression produces a value outside the expected range (a negative value, for example), no error occurs, and the row is added to the first segment of the projection

ALL NODES
Automatically distributes the data evenly across all nodes at the time the CREATE PROJECTION statement is executed. The ordering of the nodes is fixed.

OFFSET offset
Is an integer that specifies the node within the ordered sequence on which to start the segmentation distribution, relative to 0. See the example below.

NODES node [ , ... ]
Specifies a subset of the nodes in the cluster over which to distribute the data. You can use a specific node only once in any projection. For a list of the nodes in a database, use the View Database command in the Administration Tools.

Notes

• Omitting an OFFSET clause is equivalent to OFFSET 0.
• Table column names must be used in the expression, not the new projection column names.
• CREATE PROJECTION accepts the deprecated syntax SITES node for compatibility with previous releases.

Examples

CREATE PROJECTION ... SEGMENTED BY HASH(C1,C2) ALL NODES;
CREATE PROJECTION ... SEGMENTED BY HASH(C1,C2) ALL NODES OFFSET 1;

CREATE PROJECTION fact_ts_2 (f_price, f_cost, f_tid, f_cid, f_date) AS
  (SELECT price, cost, tid, cid, dwdate FROM fact)
  SEGMENTED BY ModularHash(dwdate) ALL NODES OFFSET 2;

The example produces two hash-segmented buddy projections that form part of a K-Safe design. The projections can use different sort orders.

See Also

HASH (page 181) and MODULARHASH (page 183)

range-segmentation-clause

Range segmentation allows you to segment a projection based on a known range of values stored in a specific column chosen to provide even distribution of data across a set of nodes, resulting in optimal query execution.

Note: Vertica Systems, Inc. recommends that you use hash segmentation instead of range segmentation.

Syntax

SEGMENTED BY expression
NODE node VALUES LESS THAN value
...
NODE node VALUES LESS THAN MAXVALUE

Parameters (Range Segmentation)

SEGMENTED BY expression
Is a single column reference (see "Column References" on page 74) to a column in the SELECT list of the CREATE PROJECTION statement. Choose a column that has: an INTEGER or FLOAT data type, a known range of data values, an even distribution of data values, and a large number of unique data values. Avoid columns that: are foreign keys, are used in query predicates, have a date/time data type, or have correlations with other columns due to functional dependencies. Note: Segmenting on DATE/TIME data types is valid but guaranteed to produce temporal skew in the data distribution and is not recommended. If you choose this option, do not use TIME or TIMETZ because their range is only 24 hours.

NODE node
Is a symbolic name for a node. You can use a specific node only once in any projection. For a list of the nodes in a database, use SELECT * FROM NODE_RESOURCES.

VALUES LESS THAN value
Specifies that this segment can contain a range of data values less than the specified value. In other words, the minimum value of the range is determined by the value of the previous segment (if any), except that segments cannot overlap. The maximum value depends on the data type of the segmentation column.
the user name is used as the schema name. and the row is added to a segment of the projection. In other words. The schema name must be distinct from all other schemas within the database. If a user name is not provided. the minimum value of the range is determined by the value of the previous segment (if any). Notes • The SEGMENTED BY expression syntax allows a general SQL expression but there is no reason to use anything other than a single column reference (see "Column References" on page 74) for range segmentation. Only a Superuser is allowed to create AUTHORIZATION user_name -344- . See DEPRECATED syntax in the Troubleshooting Guide. the user who creates the schema is assigned ownership. In other words. During INSERT or COPY to a segmented projection. • • • See Also NODE_RESOURCES (page 444) CREATE SCHEMA Defines a new schema. CREATE PROJECTION with range segmentation allows the SEGMENTED BY expression to be a single column-reference to a column in the projection-column list for compatibility with previous releases. no error occurs. Assigns ownership of the schema to a user.SQL Reference Manual than the specified value. except that segments cannot overlap. The maximum value depends on the data type of the segmentation column. CREATE PROJECTION with range segmentation accepts the deprecated syntax SITE node for compatibility with previous releases. the following restrictions apply: § All leaf expressions must be either constants (on page 55) or column-references (see "Column References" on page 74) to a column in the SELECT list of the CREATE PROJECTION command § Aggregate functions are not allowed § The expression must return the same value over the life of the database. if expression produces a value outside the expected range (a negative value for example). If you want to use a different expression. Syntax CREATE SCHEMA schemaname [ AUTHORIZATION user_name ] Parameters schemaname Specifies the name of the schema to create. 
MAXVALUE
Specifies a sub-range with no upper limit. In other words, it represents a value greater than the maximum value that can exist in the data.

Notes

• The SEGMENTED BY expression syntax allows a general SQL expression, but there is no reason to use anything other than a single column reference (see "Column References" on page 74) for range segmentation.
• During INSERT or COPY to a segmented projection, if expression produces a value outside the expected range (a negative value, for example), no error occurs, and the row is added to a segment of the projection.
• CREATE PROJECTION with range segmentation allows the SEGMENTED BY expression to be a single column-reference to a column in the projection-column list for compatibility with previous releases. This syntax is considered to be a deprecated feature and causes a warning message. See DEPRECATED syntax in the Troubleshooting Guide.
• CREATE PROJECTION with range segmentation accepts the deprecated syntax SITE node for compatibility with previous releases.

See Also

NODE_RESOURCES (page 444)

CREATE SCHEMA

Defines a new schema.

Syntax

CREATE SCHEMA schemaname [ AUTHORIZATION user_name ]

Parameters

schemaname
Specifies the name of the schema to create. The schema name must be distinct from all other schemas within the database. If the schema name is not provided, the user name is used as the schema name.

AUTHORIZATION user_name
Assigns ownership of the schema to a user. If a user name is not provided, the user who creates the schema is assigned ownership. Only a Superuser is allowed to create a schema that is owned by a different user.
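As a sketch of the AUTHORIZATION parameter (the schema and user names here are hypothetical), a superuser can assign ownership at creation time:

```sql
-- Run as a superuser: create a schema owned by another user.
CREATE SCHEMA S2 AUTHORIZATION Fred;
```

Without the AUTHORIZATION clause, ownership would instead default to the user issuing the statement.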

Notes

Optionally, CREATE SCHEMA could include the following sub-statements to create tables within the schema:

• CREATE TABLE (page 346)
• GRANT

With the following exceptions, these sub-statements are treated as if they have been entered as individual commands after the CREATE SCHEMA statement has completed:

• If the AUTHORIZATION statement is used, all tables are owned by the specified user.
• The CREATE SCHEMA statement and all its associated sub-statements are completed as one transaction. If any of the statements fail, the entire CREATE SCHEMA statement is rolled back.

Restrictions

To create a schema, the user must either be a superuser or have CREATE privilege for the database.

Examples

The following example creates a schema named S1 with no objects:

CREATE SCHEMA S1;

The following example creates a schema named S1 with a table named T. It grants Fred, Aniket, and Pequan access to all existing tables and ALL privileges on table T:

CREATE SCHEMA S1
  CREATE TABLE T (C INT)
  GRANT USAGE ON SCHEMA S1 TO Fred, Aniket, Pequan
  GRANT ALL ON TABLE T TO Fred, Aniket, Pequan;

See Also

ALTER SCHEMA (page 314), SET SEARCH_PATH (page 397), and DROP SCHEMA (page 360)

CREATE TABLE

Creates a table in the logical schema.

Note: CREATE TABLE does not create a projection corresponding to the table. Every column in the table must exist in at least one projection before you can store data in the table.

Syntax

CREATE TABLE [schema-name.]table-name (
    column-definition (on page 348) [ , ... ]
) [ PARTITION BY partition-clause ]

Parameters

[schema-name.]table-name
Specifies the name of the table to be created. When using more than one schema, specify the schema that contains the table.

column-definition
Defines one or more columns. See column-definition (on page 348).

partition-clause
Partitioning specifies how data is organized at individual nodes in a cluster. Logically, the partition clause is applied after the SEGMENTED BY clause: after projection data is segmented, only then is the data partitioned at each node based on the criteria in the partitioning clause. The partition-clause must calculate an idempotent value from its arguments and must be not null. All leaf expressions must be either constants or columns of the table; all other expressions must be functions and operators. Aggregate functions and subqueries are not permitted in the expression.

Note: Due to the impact on the number of ROSs, explicit and implicit upper limits are imposed on the number of partitions a projection can have. These limits, however, are detected during the course of operation, such as during COPY.

Usage

Creating a table with the partition clause causes all projections anchored on that table to be partitioned according to the partitioning clause. For each partitioned projection, there are, logically, as many partitions as the number of unique values returned by the partition expression applied over the tuples of the projection. Creating a partitioned table does not necessarily force all data feeding into the table's projections to be segregated immediately.

Restrictions
• CREATE TABLE does not allow table constraints; only column and correlation constraints.
• If a database has had automatic recovery enabled, you must temporarily disable automatic recovery in order to create a new table. In other words, you must:
  SELECT MARK_DESIGN_KSAFE(0)
  CREATE TABLE ...
  CREATE PROJECTION ...
  SELECT MARK_DESIGN_KSAFE(1)
• Cancelling a CREATE TABLE statement can cause unpredictable results. Vertica Systems, Inc. recommends that you allow the statement to finish, then use DROP TABLE (page 362).

Examples
The following example creates a table named Product_Dimension in the Retail schema:
CREATE TABLE Retail.Product_Dimension (
    Product_Key              integer NOT NULL,
    Product_Description      varchar(128),
    SKU_Number               char(32) NOT NULL,
    Category_Description     char(32),
    Department_Description   char(32) NOT NULL,
    Package_Type_Description char(32),
    Package_Size             char(32),
    Fat_Content              integer,
    Diet_Type                char(32),
    Weight                   integer,
    Weight_Units_of_Measure  char(32),
    Shelf_Width              integer,
    Shelf_Height             integer,
    Shelf_Depth              integer
);
The following example partitions data by state:
CREATE TABLE fact(..., state VARCHAR2 NOT NULL, ...) PARTITION BY state;
The following example partitions data by year:
CREATE TABLE fact(..., date_col date NOT NULL, ...) PARTITION BY extract('year' FROM date_col);

See Also
CREATE TEMPORARY TABLE (page 351)
DROP_PARTITION (page 266)
DROP PROJECTION (page 360)
DUMP_PARTITION_KEYS (page 270)
DUMP_PROJECTION_PARTITION_KEYS (page 270)

-347-

DUMP_TABLE_PARTITION_KEYS (page 271)
PARTITION_PROJECTION (page 288)
PARTITION_TABLE (page 289)
Partitioning Tables in the Administrator's Guide

column-definition
A column definition specifies the name, data type, and constraints to be applied to a column.

Syntax
column-name data-type [ column-constraint (on page 349) [ , ... ] ]

Parameters
column-name
  Specifies the name of a column to be created or added.
data-type
  Specifies one of the following data types:
  BINARY (page 89)
  BOOLEAN (page 93)
  CHARACTER (page 94)
  DATE/TIME (page 96)
  NUMERIC (page 103)
column-constraint
  Specifies a column constraint (see "column-constraint" on page 349) to apply to the column.

-348-

column-constraint
Adds a referential integrity constraint to the metadata of a column. See Adding Constraints in the Administrator's Guide.

Syntax
[ CONSTRAINT constraint-name ]
{ [ NOT ] NULL | PRIMARY KEY | REFERENCES table-name | UNIQUE }
[ DEFAULT default ]

Parameters
CONSTRAINT constraint-name
  Optionally assigns a name to the constraint. Vertica recommends that you name all constraints.
NULL
  [Default] Specifies that the column is allowed to contain null values.
NOT NULL
  Specifies that the column must receive a value during INSERT and UPDATE operations. If there is no value specified for the column and no default, the INSERT or UPDATE statement returns an error because no default value exists.
PRIMARY KEY
  Adds a referential integrity constraint defining the column as the primary key.
REFERENCES table-name
  Adds a referential integrity constraint defining the column as a foreign key. table-name specifies the table to which the REFERENCES constraint applies.
column-name
  Specifies the column to which the REFERENCES constraint applies. If column is omitted, the default is the primary key of table-name.
UNIQUE
  Ensures that the data contained in a column or a group of columns is unique with respect to all the rows in the table.
DEFAULT default
  [Optional] Specifies a default data value for a column if the column is used in an INSERT operation and no value is specified for the column. If no DEFAULT value is specified and no value is provided, the default is null.
  Default value usage:
  • A default value can be set for a column of any data type.
  • The default value can be any variable-free expression as long as it matches the data type of the column.
  • Variable-free expressions can contain:
    § Constants
    § SQL functions
    § Null handling functions
    § System information functions
    § String functions

-349-

    § Numeric functions
    § Formatting functions
    § Nested functions
    § All Vertica supported operators
  Default value restrictions:
  • Expressions can contain only constant arguments.
  • Subqueries and cross-references to other columns in the table are not permitted in the expression.
  • The return value of a default expression cannot be NULL.
  • The return data type of the default expression after evaluation should either match that of the column for which it is defined, or an implicit cast between the two data types should be possible. For example, a character value cannot be cast to a numeric data type implicitly, but a number data type can be cast to a character data type implicitly.
  • Default expressions, when evaluated, should conform to the bounds for the column.
  • Volatile functions are not supported when adding columns to existing tables. For example, CURRVAL(), RANDOM(), SYSDATE(), TIMEOFDAY(), and SETVAL() are not supported. See ALTER TABLE (page 316).
  • Vertica does not support expressions in the DEFAULT clause.

Notes
• A FOREIGN KEY constraint can be specified solely by a REFERENCES clause naming the table that contains the PRIMARY KEY; the columns in the referenced table do not need to be explicitly specified. For example:
  CREATE TABLE fact(c1 INTEGER PRIMARY KEY);
  CREATE TABLE dim (c1 INTEGER REFERENCES fact);
• You must specify NOT NULL constraints on columns that are given PRIMARY and REFERENCES constraints.

Example
The following creates the store dimension table and sets the default column value for Store_state to MA:
CREATE TABLE store_dimension (Store_state CHAR (2) DEFAULT 'MA');

-350-
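The constraint-naming recommendation and the NOT NULL rule above can be sketched together. This is a hypothetical illustration; the table and constraint names are assumptions.

```sql
-- Hypothetical sketch: a named primary key and a named foreign key.
-- Both constrained columns also carry NOT NULL, as the rule above
-- requires for PRIMARY and REFERENCES columns.
CREATE TABLE warehouse (
    wh_key INTEGER NOT NULL CONSTRAINT pk_warehouse PRIMARY KEY
);
CREATE TABLE shipment (
    sh_key INTEGER NOT NULL CONSTRAINT pk_shipment PRIMARY KEY,
    wh_key INTEGER NOT NULL CONSTRAINT fk_shipment_wh REFERENCES warehouse
);
```

Naming constraints this way makes dependency messages (such as the NOTICE output shown under DROP TABLE) much easier to read back to a specific constraint.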

CREATE TEMPORARY TABLE
Vertica supports transaction- and session-scoped GLOBAL temporary tables. The CREATE TEMPORARY TABLE command defines a GLOBAL temporary table as one that is visible to all users and sessions; temporary tables in Vertica are always global. The contents (data) of the table, however, are private to the transaction or session where the data was inserted: temporary table data is visible only to the session that inserts the data into the table. This allows two users, A and B, to concurrently use the same temporary table but see only data specific to his or her own transactions for the duration of those transactions or sessions. Data is automatically removed when the transaction commits or rolls back or the session ends. The definition of the temporary table persists in the database catalogs until explicitly removed by using the DROP TABLE (page 362) statement.
A common use case for a temporary table is to divide complex query processing into multiple steps. Typically, a reporting tool holds intermediate results while reports are generated (for example, first get a result set, then query the result set, and so on). You can also write subqueries.

Syntax
CREATE [ [ GLOBAL ] { TEMPORARY | TEMP } ] TABLE table-name
  ( { column-name data-type [ DEFAULT default ] [ NULL | NOT NULL ] } [ , ... ] )
  [ ON COMMIT { DELETE | PRESERVE } ROWS ]
  [ NO PROJECTION ]

Parameters
GLOBAL
  [Optional] Specifies that the table definition is visible to all sessions.
table-name
  Specifies the name of the temporary table to be created.
column-name
  Specifies the name of a column to be created in the new temporary table.
data-type
  Specifies one of the following data types:
  BINARY (page 89)
  BOOLEAN (page 93)
  CHARACTER (page 94)
  DATE/TIME (page 96)
  NUMERIC (page 103)

-351-

DEFAULT default
  [Optional] Specifies a default data value for a column if the column is used in an INSERT operation and no value is specified for the column. If no DEFAULT value is specified and no value is provided, the default is null.
  Default value usage:
  • A default value can be set for a column of any data type.
  • The default value can be any variable-free expression as long as it matches the data type of the column.
  • The return data type of the default expression after evaluation should either match that of the column for which it is defined, or an implicit cast between the two data types should be possible. For example, a character value cannot be cast to a numeric data type implicitly, but a number data type can be cast to a character data type implicitly.
  Variable-free expressions can contain:
  § Constants
  § SQL functions
  § Null handling functions
  § System information functions
  § String functions
  § Numeric functions
  § Formatting functions
  § Nested functions
  § All Vertica supported operators
  Default value restrictions:
  • Expressions can contain only constant arguments.
  • Subqueries and cross-references to other columns in the table are not permitted in the expression.
  • The return value of a default expression cannot be NULL.
  • Default expressions, when evaluated, should conform to the bounds for the column.
  • Volatile functions are not supported when adding columns to existing tables. For example, CURRVAL(), RANDOM(), SYSDATE(), TIMEOFDAY(), and SETVAL() are not supported. See ALTER TABLE (page 316).
NULL
  [Default] Specifies that the column is allowed to contain null values.
NOT NULL
  Specifies that the column must receive a value during INSERT and UPDATE operations. If there is no value specified for the column and no default, the INSERT or UPDATE statement returns an error because no default value exists.

-352-

ON COMMIT { PRESERVE | DELETE } ROWS
  [Optional] Specifies whether data is transaction- or session-scoped:
  DELETE marks the temporary table for transaction-scoped data. Vertica truncates the table (deletes all its rows) after each commit. DELETE ROWS is the default.
  PRESERVE marks the temporary table for session-scoped data, which is preserved beyond the lifetime of a single transaction. Vertica truncates the table (deletes all its rows) when you terminate a session.
NO PROJECTION
  [Optional] Prevents the automatic creation of a default superprojection for the temporary table. By default, Vertica automatically creates a default superprojection for a temporary table and automatically chooses a sort order and compression techniques for the projection. This projection is unsegmented and has the property that any data inserted into the table is local only to the node that initiated the transaction. In order to override this default behavior, use the NO PROJECTION keyword when creating the temporary table and then use the CREATE PROJECTION statement to create your own custom projections.

Notes
• Queries involving temporary tables have the same restrictions on SQL support as normal queries that do not use temporary tables.
• In general, when using a temporary table as a dimension table in a star query, bypass the default projection and instead create replicated projections on the temporary tables just as you would do for a normal query.
• Make sure you create projections before you load data. You cannot add projections to non-empty, session-scoped temporary tables.
• Prejoin projections that refer to both temporary and non-temporary tables are not supported.
• Single-node (pinned to the initiator node only) projections are supported.
• Moveout and mergeout operations cannot be used on session-scoped temporary data.
• AT EPOCH LATEST queries that refer to session-scoped temporary tables work the same as those for transaction-scoped temporary tables. Both return all committed and uncommitted data regardless of epoch. For example, you can commit data from a temporary table in one epoch, advance the epoch, and then commit data in a new epoch.
• Session-scoped temporary table data is not visible using system (virtual) tables.
• If you issue the TRUNCATE TABLE (page 403) statement against a temporary table, only session-specific data is truncated, with no effect on data in other sessions.
• The DELETE ... FROM TEMP TABLE syntax does not truncate data when the table was created with PRESERVE; it marks rows for deletion. See DELETE (page 358) for additional details.
• Vertica supports session-scoped isolation and statement-level rollback of temporary table data.

-353-
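The NO PROJECTION workflow described above can be sketched as follows. This is a hypothetical illustration; the table, projection, and column names are assumptions, and the exact CREATE PROJECTION clauses for your release are documented on page 337.

```sql
-- Hypothetical sketch: suppress the default superprojection, then
-- define a custom projection BEFORE loading data (projections cannot
-- be added to a non-empty session-scoped temporary table).
CREATE GLOBAL TEMP TABLE dim_tmp (
    dim_key  INTEGER,
    dim_name VARCHAR(64)
) ON COMMIT PRESERVE ROWS
  NO PROJECTION;

CREATE PROJECTION dim_tmp_p (dim_key, dim_name)
    AS SELECT dim_key, dim_name FROM dim_tmp
    ORDER BY dim_key;
```

This follows the star-query advice above: replacing the default unsegmented projection with one you design yourself.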
Prejoin projections that refer to both temporary and non-temporary tables are not supported. In order to override this default behavior. and then commit data in a new epoch.or session-scoped: DELETE marks the temporary table for transaction-scoped data. In general. Single-node (pinned to the initiator node only) projections are supported. The DELETE . Make sure you create projections before you load data. Vertica truncates the table (delete all its rows) after each commit. PRESERVE marks the temporary table for session-scoped data. Moveout and mergeout operations cannot be used on session-scoped temporary data. you can commit data from a temporary table in one epoch. See DELETE (page 358) for additional details. which is preserved beyond the lifetime of a single transaction.

By contrast.SQL Reference Manual Example Session-scoped rows in a GLOBAL temporary table can be preserved for the whole session or for the current transaction only. ON COMMIT DELETE ROWS indicates that the data should be deleted at the end of the transaction. y NUMERIC ) ON COMMIT PRESERVE ROWS. CREATE GLOBAL TEMP TABLE temp_table2 ( x NUMERIC. See Also ALTER TABLE (page 316). y NUMERIC ) ON COMMIT DELETE ROWS. DROP TABLE (page 362). CREATE GLOBAL TEMP TABLE temp_table1 ( x NUMERIC. CREATE TABLE (page 346). ON COMMIT PRESERVE ROWS indicates that the data should be preserved until the end of the session. in the first statement below. For example. DELETE (page 358). IMPLEMENT_TEMP_DESIGN (page 279) Subqueries in the Programmer's Guide Transactions in the Concepts Guide -354- .

Is the default. Newly-created users do not have access to schema PUBLIC by default. You can change a user password by using the ALTER USER statement.355 CREATE USER Adds a name to the list of authorized database users. are not allowed. Unencrypted passwords are visible in the catalog in clear text.A md5 encryption scheme is used Is the password to assign to the user. Vertica Systems recommends that all database users have encrypted passwords. a server that accepts remote connections could have many database users who have no local operating system account. and in such cases there need be no connection between database user names and OS user names. Syntax CREATE USER name [ WITH [ ENCRYPTED | UNENCRYPTED ] PASSWORD 'password' ] Parameters name Specifies the name of the user to create. User names are not case-sensitive. including IN GROUP. If all the users of a particular server also have accounts on the server's machine. • • • • Examples CREATE USER Fred. The following options are allowed but ignored: § SYSID uid § CREATEDB § NOCREATEDB § CREATEUSER § NOCREATEUSER § VALID UNTIL Other options. Unless the database is used solely for evaluation purposes. ENCRYPTED password Notes • • • Only a superuser can create a user. Tip: Vertica database user names are logically separate from user names of the operating system in which the server runs. names that contain special characters must be double-quoted. If you want to configure a user to not have any password authentication. -355- . you can set the empty password ‘’ in CREATE or ALTER USER statements. Make sure to GRANT USAGE ON SCHEMA PUBLIC to all users you create. However. it makes sense to assign database user names that match their operating system user names.

SELECT on all the tables and views referenced within the view's defining query.] ) ] AS query ] Parameters viewname Specifies the name of the view to create. Use a SELECT (page 382) statement to specify the query. If the view name is not provided. update. and other views. CREATE VIEW Defines a new view. Vertica automatically deduces the column names from the query.. and it does not check for dependencies.The SELECT statement can refer to tables. delete. or projection within the database. [Optional] Specifies the list of names to be used as column names for the view. Original Query: SELECT * FROM ship.SQL Reference Manual GRANT USAGE ON SCHEMA PUBLIC to Fred.shipping_dimension. column_name query Notes Views are read only. You cannot perform insert. The following example defines a view (ship) and illustrates how a query that refers to the view is transformed.. the user name is used as the view name. Only the specified view is dropped. Restrictions To create a view. Syntax CREATE VIEW viewname [ ( column_name [. the user must be a superuser or have the following privileges: • • CREATE on the schema in which the view is created. Do not use the same name as any table. Columns are presented from left to right in the order given. Specifies the query that the view executes. view. If not specified. Use the DROP VIEW (page 364) statement to drop a view. Vertica also uses the query to deduce the list of names to be used as columns names for the view if they are not specified. Dropping a view causes any view that references it to fail. The view name must be unique. temp tables. When Vertica processes a query that contains a view. -356- . . or copy operations on a view.shipping_dimension) AS ship. the view is treated as a subquery because the view name is replaced by the view's defining query. Vertica does not support CASCADE functionality for views. Transformed query: SELECT * FROM (SELECT * FROM public. View: CREATE VIEW ship AS SELECT * FROM public.

and REVOKE (View) (page 377) -357- .000. CA CO FL GA IL MA MI MS TN TX UT WA See Also SELECT (page 382).000. customer_state FROM public.000.customer_dimension WHERE customer_key IN (SELECT customer_key FROM store. GRANT (View) (page 372). SELECT * FROM 2278679481 | 558455361 | 226947952 | 252410919 | 288327492 | 275529410 | 249433841 | 360368119 | 331044783 | 870156932 | 200938064 | 265890220 | (12 rows) myview WHERE SUM > 2000000000.store_sales_fact) GROUP BY customer_state ORDER BY customer_state ASC The following example uses the myview view with a WHERE clause that limits the results to combined salaries of greater than 2. DROP VIEW (page 364).SQL Statements • USAGE on all the schemas that contain the tables and views referenced within the view's defining query. Example CREATE VIEW myview AS SELECT SUM(annual_income).

so be cautions of WOS Overload. but the columns. Syntax DELETE FROM [schema_name. specify the schema that contains the table in your DELETE statement. the user must have both SELECT (page 382) and DELETE privileges on the table. DELETE marks records for deletion in the WOS. The effect is similar to when a COMMIT is issued. It does not delete data from disk storage for base tables. Notes • • • • • If the DELETE operation succeeds on temporary tables.]table WHERE clause (on page 386) Parameters [schema_name. and constraints are preserved. you cannot roll back to a prior savepoint. which greatly improves performance. To use DELETE or UPDATE (page 408) commands with a WHERE clause. Specifies the name of a base table or temporary table. In this special case. You cannot delete records from a projection. thus making it easy to re-populate the table. DELETE behaves the same as for base tables. use a DELETE statement with no WHERE clause.C1. If you include a WHERE clause when performing delete operations on temporary tables. DELETE FROM T WHERE C1=C2-C1.358 DELETE Marks tuples as no longer valid in the current epoch. in that all rows are removed. When using more than one schema. Examples The following command truncates a temporary table called temp1: DELETE FROM temp1. marking all delete vectors for storage.] table Specifies the name of an optional schema. the rows are not stored in the system. projections. DELETE FROM temp_table is the only way to truncate a temporary table without ending the transaction. and you lose any performance benefits. Using DELETE for temporary tables To remove all rows from a temporary table. -358- . The following command deletes all records from base table T where C1 = C2 .

See Also DROP TABLE (page 362) and TRUNCATE TABLE (page 403) Deleting Data in the Administrator's Guide -359- . 'NH').customer WHERE state IN ('MA'.SQL Statements The following command deletes all records from the customer table in the retail schema where the state attribute is in MA or NH: DELETE FROM retail.

Drops the projection only if it does not contain any objects. and MARK_DESIGN_KSAFE (page 285) Adding Nodes in the Administrator's Guide DROP SCHEMA Permanently removes a schema from the database. See Also CREATE PROJECTION (page 337). so DROP PROJECTION fails if a projection is the table's only superprojection. RESTRICT CASCADE Notes In previous versions of Vertica.. Drops the projection even if it contains one or more objects. Syntax DROP PROJECTION projname [ . In such cases.fact_proj_a. If you want to drop a set of buddy projections. See MARK_DESIGN_KSAFE (page 285) for details.. specify the schema that contains the projection. Alternatively. you can issue a command like the following.projname'. you could drop all projections from a table. schema1. When using more than one schema. DROP TABLE (page 362). To a drop projections: DROP PROJECTION prejoin_p_site02. -360- . This is an irreversible process. use the DROP TABLE command.SQL Reference Manual DROP PROJECTION Marks a projection to be dropped from the catalog so it is unavailable to user queries. you could be prevented from dropping them individually using a sequence of DROP PROJECTION statements due to K-Safety violations. RESTRICT is the default.] [ RESTRICT | CASCADE ] Parameters projname Matches the projname from a CREATE PROJECTION statement and reverses its effect.fact_proj_b. which drops projections on a particular schema: DROP PROJECTION schema1. Be sure that you want to remove the schema and all its objects before you drop it. projname can be 'projname' or 'schema. GET_PROJECTION_STATUS (page 277). In order to prevent data loss and inconsistencies. projname . GET_PROJECTIONS (page 277). tables must now contain one superprojection.

To force a drop. Restrictions • • • • • The PUBLIC schema cannot be dropped. A schema can only be dropped by its owner or a superuser. Cancelling a DROP SCHEMA statement can cause unpredictable results. a schema cannot be dropped if it contains one or more objects. Examples The following example drops schema S1 only if it doesn't contain any objects: DROP SCHEMA S1 The following example drops schema S1 whether or not it contains objects: DROP SCHEMA S1 CASCADE -361- . use the CASCADE statement.SQL Statements Syntax DROP SCHEMA schema [ CASCADE | RESTRICT ] Parameters schema CASCADE RESTRICT Specifies the name of the schema Drops the schema even if it contains one or more objects Drops the schema only if it does not contain any objects (the default) By default. Notes A schema owner can drop a schema even if the owner does not own all the objects within the schema. All the objects within the schema is also dropped. If a user is accessing any object within a schema that is in the process of being dropped. the schema is not deleted until the transaction completes.

]table [ CASCADE ] Parameters [schema-name. Dropping a table causes any view that references it to fail. -362- • . a message listing the projections displays.362 DROP TABLE Drops a table and. its associated views and projections. [Optional] Drops all projections that include the table and all views that reference the table.] table [Optional] Specifies the name of an optional schema. If you try to drop an table that has associated projections. CASCADE Caution: Dropping a table and its associated projections can destroy the K-Safety of your physical schema design. as long as the new table contains the same columns and column names. Canceling a DROP TABLE statement can cause unpredictable results. Note: The schema owner can drop a table but cannot truncate a table. specify the schema that contains the table in the DROP TABLE statement. Use this command only when absolutely necessary.. Specifies the name of a schema table. When using more than one schema. Syntax DROP TABLE [ schema-name. DROP TABLE Notes • • • • The table owner or schema owner or super user can drop a table. CASCADE to drop the dependent objects too. Vertica recommends that you make sure that all other users have disconnected before using DROP TABLE.depends on Table d1 NOTICE: Projection d1p1 depends on Table d1 NOTICE: Projection d1p2 depends on Table d1 NOTICE: Projection d1p3 depends on Table d1 NOTICE: Projection f1d1p1 depends on Table d1 NOTICE: Projection f1d1p2 depends on Table d1 NOTICE: Projection f1d1p3 depends on Table d1 ERROR: DROP failed due to dependencies: Cannot drop Table d1 because other objects depend on it HINT: Use DROP . For example: => DROP TABLE d1.. NOTICE: Constraint . However views that reference a table that is dropped and then replaced by another table with the same name continue to function using the contents of the new table. optionally. Use the multiple projection syntax in K-safe clusters. => DROP TABLE d1 CASCADE.

SQL Statements See Also DELETE (page 358). DROP PROJECTION (page 360). and TRUNCATE TABLE (page 403) Adding Nodes and Deleting Data in the Administrator's Guide -363- .

Syntax DROP VIEW viewname [ . . it returns an error. Dropping a view causes any view that references it to fail.. Notes • • Only the specified view is dropped. the user must be either a superuser or the person who created the view.. Views that reference a view or table that is dropped and then replaced by another view or table with the same name continue to function using the contents of the new view or table if it contains the same column names. DROP VIEW Removes the specified view. Examples DROP VIEW myview. Syntax DROP USER name [. ] Parameters viewname Specifies the name of the view to drop.364 DROP USER Removes a name from the list of authorized database users. if possible. . Vertica does not support cascade functionality for views and it does not check for dependencies.] Parameters name Specifies the name or names of the user to drop. Otherwise. -364- . If the column data type changes. Restrictions To drop a view. the server coerces the old data type to the new one.. Examples DROP USER Fred..

1[label="ValExpNode"]. Table Oid. You can obtain a Fedora Core 4 RPM for GraphViz from: yum -y install graphviz A example of a GraphViz graph for a Vertica plan: digraph G { graph [rankdir=BT] 0[label="Root"]. | INSERT. For information on how to interpret the output. Table Oid... Table Oid.3 Pred: Y Out: P ID:5 Cost:1 Card:-1 DS: Position Filtered by ID:4 ProjCol:c_cid. • A compact human-readable representation of the query plan.2 Pred: Y Out: P ID:4 Cost:0.1 Pred: N Out: V ID:6 Cost:1 Card:-1 DS: Position Filtered by ID:4 ProjCol:c_state.c_state)"].365 EXPLAIN Outputs the query plan.7 Card:-1 Projection: P0 ID:2 Cost:0.. Syntax EXPLAIN { SELECT.Attr#:25424.Attr#:25424..Attr#:25424. 3[label="PDS(P0. Graphviz is a graph plotting utility with layout algorithms.Attr#:25424. laid out hierarchically.1 Card:-1 DS: Value Idx ProjCol:c_state. -365- • . Table Oid.Attr#:25424.c_gender)"]..4 Pred: Y Out: P ID:3 Cost:0.3 Card:-1 DS: Position Filtered by ID:2 ProjCol:c_gender. Table Oid. | UPDATE. } Output Note: The EXPLAIN command is provided as a support feature and is not fully described here. 2[label="VDS:DVIDX(P0. etc..4 Pred: N Out: V A GraphViz format of the graph for display in a graphical format. contact Technical Support (page 33). For example: Vertica QUERY PLAN DESCRIPTION: -----------------------------ID:1 Cost:2.3 Card:-1 DS: Position Filtered by ID:3 ProjCol:c_name.

SQL Reference Manual • 4[label="PDS(P0.c_name)"]. 1->0 [label="V"]. 4.10" -Gmargin="0... 2->3 [label="P"].txt > /tmp/x. 7->1 [label="P+V"]. dot -Tps /tmp/x.ps [evince x.txt: 1. To scale an image for printing (8. 5->7 [label="P"]. Landscape: dot -Tps -Gsize="10. 3->4 [label="P"].c_state)"].ps 2. copy the output above to a file. Alternative: dot -Tps | ghostview . } To create a picture of the plan. 5[label="Copy"].5"x11" in this example): 6. 1->0 [label="V"]. Alternative: generate jpg using -Tjpg..5. 7. Portrait: dot -Tps -Gsize="7. in this example /tmp/x.c_cid)"]..and paste in the digraph. -366- .5" -Grotate="90" . 4->5 [label="P"]. 6->1 [label="P+V"].ps works if you don't have ggv] 3. 5->6 [label="P"].5" -Gmargin="0. ggv x. 6[label="PDS(P0. 5. 7[label="PDS(P0.7.5" .

SQL Statements Example: -367- .

SQL Reference Manual -368- .

php) -369- .graphviz.org/Documentation.php (http://www.org/Documentation.graphviz.SQL Statements GraphViz Information http://www.

Make sure to grant USAGE on schema PUBLIC to all users you create... Grants the privilege to a specific user. Is for SQL standard compatibility and is ignored. ALL PRIVILEGES schemaname username PUBLIC WITH GRANT OPTION Notes Newly-created users do not have access to schema PUBLIC by default. Allows the recipient of the privilege to grant it to other users.] TO { username | PUBLIC } [. Note: In a database with trust authentication.370 GRANT (Schema) Grants privileges on a schema to a database user.] | ALL [ PRIVILEGES ] } ON SCHEMA schemaname [.. -370- .. . . Syntax GRANT { { CREATE | USAGE } [. Allows the user access to the objects contained within the schema.. the GRANT and REVOKE statements appear to work as expected but have no actual effect on the security of the database. Is synonymous with CREATE. See the GRANT TABLE (page 371) and GRANT VIEW (page 372) statements. . This allows the user to look up objects within the schema.] [ WITH GRANT OPTION ] Parameters CREATE USAGE Allows the user read access to the schema and the right to create tables and views within the schema. Note that the user must also be granted access to the individual objects.. Grants the privilege to all users. Is the name of the schema for which privileges are being granted.

UPDATE DELETE REFERENCES ALL PRIVILEGES [schema-name. Allows DELETE of a row from the specified table. -371- .]tablename [. Note: In a database with trust authentication.] TO { username | PUBLIC } [.371 GRANT (Table) Grants privileges on a table to a user. . Notes To use the DELETE (page 358) or UPDATE (page 408) commands with a WHERE clause (page 386). Specifies the user to whom to grant the privileges. specify the schema that contains the table on which to grant privileges. Syntax GRANT { { SELECT | INSERT | UPDATE | DELETE | REFERENCES } [. a user must have both SELECT and UPDATE and DELETE privileges on the table...] [ WITH GRANT OPTION ] Parameters SELECT INSERT Allows the user to SELECT from any column of the specified table. the GRANT and REVOKE statements appear to work as expected but have no actual effect on the security of the database. Allows the user to INSERT tuples into the specified table and to use the COPY (page 323) command to load the table.. INSERT. DELETE. while COPY FROM <file> is an admin-only operation. UPDATE. Grants the privilege to all users.] | ALL [ PRIVILEGES ] } ON [ TABLE ] [schema-name. Is for SQL standard compatibility and is ignored.. Specifies the table on which to grant the privileges. Allows the user to grant the same privileges to other users. When using more than one schema. Note: COPY FROM STDIN is allowed to any user granted the INSERT privilege. Is synonomous with SELECT..]tab lename username PUBLIC WITH GRANT OPTION Allows the user to UPDATE tuples in the specified table.. Is necessary to have this privilege on both the referencing and referenced tables in order to create a foreign key constraint.. REFERENCES. .

Note: In a database with trust authentication.] [ WITH GRANT OPTION ] Parameters SELECT PRIVILEGES ALL [schema-name.SQL Reference Manual GRANT (View) Grants privileges on a view to a database user. specify the schema that contains the view. . Syntax GRANT { { SELECT } | ALL [ PRIVILEGES ] } ON [schema-name.. Specifies the view on which to grant the privileges. . the GRANT and REVOKE statements appear to work as expected but have no actual effect on the security of the database.]viewname [.] TO { username | PUBLIC } [.]vie wname username PUBLIC WITH GRANT OPTION Allows the user to perform SELECT operations on a view and the resources referenced within it. Is synonomous with SELECT. Is for SQL standard compatibility and is ignored. Specifies the user to whom to grant the privileges.. Grants the privilege to all users. Allows the user to grant the same privileges to other users... -372- . When using more than one schema.

Instead. Specifies a list of values to store in the correspond columns.]tabl e column Writes the data directly to disk (ROS) instead of memory (WOS). This is necessary in order to use INSERT . If no value is supplied for a column.. Specifies a column of the table. SELECT. -373- ... if the column is defined as NOT NULL. DEFAULT VALUES Fills all columns with their default values as specified in CREATE TABLE (page 346). .]table [ ( column [.. specify the schema that contains the table..] ) ] { DEFAULT VALUES | VALUES ( { expression | DEFAULT } [. You cannot INSERT tuples into a projection. Isolation level applies only to the SELECT clauses and work just like an normal query except that you cannot use AT EPOCH LATEST or AT TIME in an INSERT .. (page 382) } Parameters /*+ direct */ [schema-name. This syntax is only valid when used with INSERT.. .. Vertica implicitly adds a DEFAULT value. Notes • An INSERT . returns an error..373 INSERT Inserts values into the Write Optimized Store (WOS) for all projections of a table.. SELECT while the database is being loaded.. When using more than one schema. SELECT statement. statement refers to tables in both its INSERT and SELECT clauses. Specifies a value to store in the corresponding column. VALUES expression DEFAULT Stores the default value in the corresponding column..] ) | SELECT. if present.... Specifies a query (SELECT (page 382) statement) that supplies the rows to be inserted. Syntax INSERT [ /*+ direct */ ] INTO [schema-name.. Otherwise Vertica inserts a NULL value or.SELECT. SELECT . use the SET TRANSACTION CHARACTERISTICS (page 398) statement to set the isolation level to READ COMMITTED.. Specifies the name of a table in the schema..

•  You can list the target columns in any order. If no list of column names
   is given at all, the default is all the columns of the table in their
   declared order, or the first N column names, if there are only N columns
   supplied by the VALUES clause or query. The values supplied by the VALUES
   clause or query are associated with the explicit or implicit column list
   left-to-right.
•  You must insert one complete tuple at a time.
•  Be aware of WOS Overload.

Examples

INSERT INTO FACT VALUES (101, 102, 103, 104);
INSERT INTO Retail.T1 (C0, C1) VALUES (1, 1001);
INSERT INTO films SELECT * FROM tmp_films WHERE date_prod < '2004-05-07';
INSERT INTO CUSTOMER VALUES (10, 'male', 'DPR', 'MA', 35);

LCOPY

Is identical to the COPY (page 323) command, except that it loads data from
a client system, rather than a cluster host. The LCOPY command is available
only from the ODBC interface.

Example

The following code loads the table TEST from the file C:\load.dat located on
the system where the code is executed:

ODBCConnection<ODBCDriverConnect> test("VerticaSQL");
test.connect();
char *sql = "LCOPY test FROM 'C:\\load.dat' DELIMITER '|';";
ODBCStatement stm(test.conn);
stm.execute(sql);

RELEASE SAVEPOINT

Destroys a savepoint without undoing the effects of commands executed after
the savepoint was established.

Syntax

RELEASE [ SAVEPOINT ] savepoint_name

Parameters

savepoint_name  Specifies the name of the savepoint to destroy.

Notes

Once destroyed, the savepoint is unavailable as a rollback point.

Example

The following example establishes and then destroys a savepoint called
my_savepoint. The values 101 and 102 are both inserted at commit.

INSERT INTO product_key VALUES (101);
SAVEPOINT my_savepoint;
INSERT INTO product_key VALUES (102);
RELEASE SAVEPOINT my_savepoint;
COMMIT;

See Also

SAVEPOINT (page 380) and ROLLBACK TO SAVEPOINT (page 379)

REVOKE (Schema)

Revokes privileges on a schema from a user.

Syntax

REVOKE [ GRANT OPTION FOR ]
    { { CREATE | USAGE } [ , ... ] | ALL [ PRIVILEGES ] }
    ON SCHEMA schema-name [ , ... ]
    FROM { username | PUBLIC } [ , ... ]

Parameters

GRANT OPTION FOR  Revokes the grant option for the privilege, not the
                  privilege itself. If omitted, revokes both the privilege
                  and the grant option.
CREATE
USAGE
ALL PRIVILEGES
schema-name
username
PUBLIC            See GRANT (Schema) (page 370).

Note: In a database with trust authentication, the GRANT and REVOKE
statements appear to work as expected but have no actual effect on the
security of the database.
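A brief usage sketch of the syntax above (the schema and user names are hypothetical):

```sql
-- Revoke CREATE on a schema from one user; any USAGE privilege the
-- user holds is unaffected. Names are illustrative only.
REVOKE CREATE ON SCHEMA online_sales FROM user1;

-- Revoke only the grant option for USAGE; user1 keeps USAGE itself.
REVOKE GRANT OPTION FOR USAGE ON SCHEMA online_sales FROM user1;
```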

REVOKE (Table)

Revokes privileges on a table from a user.

Syntax

REVOKE [ GRANT OPTION FOR ]
    { { SELECT | INSERT | UPDATE | DELETE | REFERENCES } [ , ... ]
      | ALL [ PRIVILEGES ] }
    ON [ TABLE ] [schema-name.]tablename [ , ... ]
    FROM { username | PUBLIC } [ , ... ]

Parameters

GRANT OPTION FOR  Revokes the grant option for the privilege, not the
                  privilege itself. If omitted, revokes both the privilege
                  and the grant option.
SELECT
INSERT
UPDATE
DELETE
REFERENCES
ALL PRIVILEGES
[schema-name.]tablename
username
PUBLIC            See GRANT (Table) (page 371).

Note: In a database with trust authentication, the GRANT and REVOKE
statements appear to work as expected but have no actual effect on the
security of the database.

REVOKE (View)

Revokes privileges on a view from a user.

Syntax

REVOKE [ GRANT OPTION FOR ]
    { SELECT }
    ON [ VIEW ] [schema-name.]viewname [ , ... ]
    FROM { username | PUBLIC } [ , ... ]

Parameters

GRANT OPTION FOR  Revokes the grant option for the privilege, not the
                  privilege itself. If omitted, revokes both the privilege
                  and the grant option.
SELECT            Revokes the user's ability to perform SELECT operations on
                  a view and the resources referenced within it.
PRIVILEGES        Is for SQL standard compatibility and is ignored.
[schema-name.]viewname
                  Specifies the view on which to revoke the privileges. When
                  using more than one schema, specify the schema that
                  contains the view.
username          Specifies the user from whom to revoke the privileges.
PUBLIC            Revokes the privileges from all users.

Note: In a database with trust authentication, the GRANT and REVOKE
statements appear to work as expected but have no actual effect on the
security of the database.
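The two REVOKE forms above can be sketched as follows (the table, view, and user names are hypothetical):

```sql
-- Revoke INSERT and UPDATE on a table from all users.
REVOKE INSERT, UPDATE ON TABLE store.store_sales_fact FROM PUBLIC;

-- Revoke SELECT on a view from a single user.
REVOKE SELECT ON VIEW myschema.sales_view FROM user1;
```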

ROLLBACK

Ends the current transaction and discards all changes that occurred during
the transaction.

Syntax

ROLLBACK [ WORK | TRANSACTION ]

Parameters

WORK
TRANSACTION  Have no effect; they are optional keywords for readability.

Notes

When an operation is rolled back, any locks that are acquired by the
operation are also rolled back.

ROLLBACK TO SAVEPOINT

Rolls back all commands that have been entered within the transaction since
the given savepoint was established.

Syntax

ROLLBACK TO [ SAVEPOINT ] savepoint_name

Parameters

savepoint_name  Specifies the name of the savepoint to roll back to.

Notes

•  The savepoint remains valid and can be rolled back to again later if
   needed.
•  ROLLBACK TO SAVEPOINT implicitly destroys all savepoints that were
   established after the named savepoint.
•  When an operation is rolled back, any locks that are acquired by the
   operation are also rolled back.

Example

The following example rolls back the values 102 and 103 that were entered
after the savepoint, my_savepoint, was established. Only the values 101 and
104 are inserted at commit.

INSERT INTO product_key VALUES (101);
SAVEPOINT my_savepoint;
INSERT INTO product_key VALUES (102);
INSERT INTO product_key VALUES (103);
ROLLBACK TO SAVEPOINT my_savepoint;
INSERT INTO product_key VALUES (104);
COMMIT;

See Also

RELEASE SAVEPOINT (page 374) and SAVEPOINT (page 380)

SAVEPOINT

SAVEPOINT is a transaction control command that creates a special mark,
called a savepoint, inside a transaction. A savepoint allows all commands
that are executed after it was established to be rolled back, restoring the
transaction to the state it was in at the point in which the savepoint was
established.

Tip: Savepoints are useful when creating nested transactions. For example, a
savepoint could be created at the beginning of a subroutine. That way, the
result of the subroutine could be rolled back if necessary.

Syntax

SAVEPOINT savepoint_name

Parameters

savepoint_name  Specifies the name of the savepoint to create.

Notes

•  Savepoints are local to a transaction and can only be established when
   inside a transaction block.
•  Multiple savepoints can be defined within a transaction.
•  If a savepoint with the same name already exists, it is replaced with
   the new savepoint.

Example

The following example illustrates how a savepoint determines which values
within a transaction can be rolled back. The values 102 and 103 that were
entered after the savepoint, my_savepoint, was established are rolled back.
Only the values 101 and 104 are inserted at commit.

INSERT INTO T1 (product_key) VALUES (101);
SAVEPOINT my_savepoint;
INSERT INTO T1 (product_key) VALUES (102);
INSERT INTO T1 (product_key) VALUES (103);
ROLLBACK TO SAVEPOINT my_savepoint;
INSERT INTO T1 (product_key) VALUES (104);
COMMIT;
SELECT product_key FROM T1;

product_key
-------------
101
104
(2 rows)

See Also

RELEASE SAVEPOINT (page 374) and ROLLBACK TO SAVEPOINT (page 379)


SELECT

Retrieves a result set from one or more tables.

Syntax

[ AT EPOCH LATEST ] | [ AT TIME 'timestamp' ]
SELECT [ ALL | DISTINCT [ ON ( expression [ , ... ] ) ] ]
    * | expr [ [ AS ] output_name ] [ , ... ]
    [ FROM (see "FROM Clause" on page 384) [ , ... ] ]
    [ WHERE condition (see "WHERE Clause" on page 386) ]
    [ GROUP BY expression (page 388) [ , ... ] ]
    [ HAVING condition (see "HAVING Clause" on page 390) [ , ... ] ]
    [ UNION (page 404) ]
    [ ORDER BY expr (see "ORDER BY Clause" on page 391)
        [ ASC | DESC | USING operator ] [ , ... ] ]
    [ LIMIT (see "LIMIT Clause" on page 393) { count | ALL } ]
    [ OFFSET (see "OFFSET Clause" on page 394) start ]

Parameters

AT EPOCH LATEST  Queries all data in the database up to but not including
                 the current epoch, without holding a lock or blocking
                 write operations. See Snapshot Isolation for more
                 information. AT EPOCH LATEST is ignored when applied to
                 temporary tables (all rows are returned).
AT TIME 'timestamp'
                 Queries all data in the database up to and including the
                 epoch representing the specified date and time, without
                 holding a lock or blocking write operations. This is
                 called an Historical Query. AT TIME is ignored when
                 applied to temporary tables (all rows are returned).
*                Is equivalent to listing all columns of the tables in the
                 FROM Clause (page 384).
                 Note: Vertica recommends that you avoid using SELECT * for
                 performance reasons. An extremely large and wide result
                 set can cause swapping.
DISTINCT         Removes duplicate rows from the result set (or group). The
                 DISTINCT set quantifier must immediately follow the SELECT
                 keyword. Only one DISTINCT keyword can appear in the
                 select list.
expression
expr             Forms the output rows of the SELECT statement. The
                 expression can contain:
                 Column References (on page 74) to columns computed in the
                 FROM clause (page 384)
                 Constants (on page 55)
                 Mathematical Operators (on page 68)
                 String Concatenation Operators (on page 70)
                 Aggregate Expressions (page 72)
                 CASE Expressions (page 73)
                 SQL Functions (page 111)
                 Can be an analytic function (page 125).
output_name      Specifies a different name for an output column. This name
                 is primarily used to label the column for display. It can
                 also be used to refer to the column's value in ORDER BY
                 (page 391) and GROUP BY (page 388) clauses, but not in the
                 WHERE (page 386) or HAVING (page 390) clauses.

Note: By default, queries execute under the SERIALIZABLE isolation level,
which holds locks and blocks write operations. For optimal query
performance, use AT EPOCH LATEST.

Notes

The SELECT list (between the key words SELECT and FROM) specifies
expressions that form the output rows of the SELECT command.

See Also

Analytic Functions (page 125)
Subqueries and Joins in the Programmer's Guide

FROM Clause

Specifies one or more source tables from which to retrieve rows.

Syntax

FROM table-reference (on page 384) [ , table-reference (on page 384) ] ...

Parameters

table-reference  Is a table-primary (on page 384) or a joined-table (on
                 page 385).

Example

In the following example, the DISTINCT keyword makes sure each region is
returned only once:

SELECT DISTINCT customer_region FROM customer_dimension;

customer_region
-----------------
East
MidWest
NorthWest
South
SouthWest
West
(6 rows)

table-reference

Syntax

table-primary (on page 384) | joined-table (on page 385)

Parameters

table-primary  Specifies an optionally qualified table name with optional
               table aliases, column aliases, and outer joins.
joined-table   Specifies an outer join.

table-primary

Syntax

{ table-name [ AS ] alias [ ( column-alias [ , ... ] ) ]
| [ subquery ] [AS] name
| ( joined-table (on page 385) ) }

Parameters

table-name    Specifies a table in the logical schema. Vertica selects a
              suitable projection to use.
alias         Specifies a temporary name to be used for references to the
              table.
column-alias  Specifies a temporary name to be used for references to the
              column.
joined-table  Specifies an outer join.

joined-table

Syntax

table-reference join-type table-reference ON join-predicate (on page 83)

Parameters

table-reference  Is a table-primary (page 384) or another joined-table.
join-type        Is one of the following:
                 INNER JOIN
                 LEFT [ OUTER ] JOIN
                 RIGHT [ OUTER ] JOIN
join-predicate   An equi-join based on one or more columns in the joined
                 tables.

Notes

•  A query that uses INNER JOIN syntax in the FROM clause produces the same
   result set as a query that uses the WHERE clause to state the
   join-predicate.
•  The left-joined (outer) table in an outer join is the anchor table. See
   the topic "ANSI Join Syntax" in Joins in the Programmer's Guide for more
   information.
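The joined-table syntax can be sketched as follows (the fact and dimension table names are illustrative and assume a shared customer_key column; they are not definitions from this manual):

```sql
-- INNER JOIN stated in the FROM clause; produces the same result set
-- as stating the join-predicate in the WHERE clause.
SELECT s.sales_dollar_amount, c.customer_name
FROM store_sales_fact s
INNER JOIN customer_dimension c ON s.customer_key = c.customer_key;

-- LEFT OUTER JOIN; the left-joined table (store_sales_fact) is the
-- anchor table.
SELECT s.sales_dollar_amount, c.customer_name
FROM store_sales_fact s
LEFT OUTER JOIN customer_dimension c ON s.customer_key = c.customer_key;
```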

WHERE Clause

Eliminates rows from the result table that do not satisfy one or more
predicates.

Syntax

WHERE boolean-expression [ subquery ] ...

Parameters

boolean-expression  Is an expression that returns true or false. Only rows
                    for which the expression is true become part of the
                    result set.

The boolean-expression can include Boolean operators (on page 65) and the
following elements:

•  BETWEEN-predicate (on page 79)
•  Boolean-predicate (on page 80)
•  Column-value-predicate (on page 81)
•  IN-predicate (on page 82)
•  Join-predicate (on page 83)
•  LIKE-predicate (on page 84)
•  NULL-predicate (on page 86)

Usage

You can use parentheses to group expressions, predicates, and boolean
operators. For example:

WHERE NOT (A=1 AND B=2) OR C=3;

Example

The following example returns the names of 20 customers in the Eastern
region. Without the WHERE clause filter, the query returns all customer
names in the customer_dimension table.

SELECT DISTINCT customer_name
FROM customer_dimension
WHERE customer_region = 'East';

customer_name
------------------------
Alexander Brown
Alexander Greenwood
Alexander Martin
Alexander Miller
Alexander Rodriguez
Alexander Weaver
Alexander A. Jackson
Alexander A. Jones
Alexander A. Lewis
Alexander A. Li
Alexander A. Nguyen
Alexander A. Perkins
Alexander B. Fortin
Alexander B. Gauthier
Alexander B. Jackson
Alexander B. Jefferson
Alexander B. Lewis
Alexander B. McCabe
Alexander B. Rodriguez
Alexander C. Wilson
(20 rows)

GROUP BY Clause

GROUP BY divides a query result set into groups of rows that match an
expression.

Syntax

GROUP BY expression [ , ... ]

Parameters

expression  Is any expression including constants and references to columns
            (see "Column References" on page 74) in the tables specified in
            the FROM clause.

Notes

•  The expression cannot include aggregate functions (page 112).
•  All non-aggregated columns in the SELECT list must be included in the
   GROUP BY clause.

Examples

The following example looks for customer name and city and groups the
results by customer name and city:

SELECT customer_name, customer_city
FROM customer_dimension
GROUP BY customer_name, customer_city
LIMIT 10;

customer_name       | customer_city
---------------------+---------------
Alexander Bauer     | Boston
Alexander Brown     | Green Bay
Alexander Brown     | Lafayette
Alexander Fortin    | Palmdale
Alexander Garcia    | Jacksonville
Alexander Goldberg  | Downey
Alexander Greenwood | Independence
Alexander Jackson   | Beaumont
Alexander Kramer    | Elizabeth
Alexander Kramer    | Phoenix
(10 rows)

The following example also looks for customer names but takes the average of
their annual income and sorts the results by customer name:

SELECT customer_name, AVG(annual_income) AS average_income
FROM customer_dimension
GROUP BY customer_name
LIMIT 10;

customer_name       | average_income
---------------------+----------------
Alexander Bauer     | 914301
Alexander Brown     | 545041
Alexander Fortin    | 81858
Alexander Garcia    | 307990
Alexander Goldberg  | 909254
Alexander Greenwood | 725711
Alexander Jackson   | 748682
Alexander Kramer    | 419818
Alexander Li        | 742141
Alexander Martin    | 633009
(10 rows)

SELECT product_key + store_key AS key,
       sales_quantity + sales_dollar_amount AS sales,
       COUNT(*)
FROM store.store_sales_fact
GROUP BY key, sales
LIMIT 10;

key | sales | count
-----+-------+-------
 38 |   190 |     1
 43 |  -481 |     1
 44 |  -274 |     1
 68 |   312 |     1
 73 |    38 |     1
 83 |   278 |     1
 84 |   151 |     1
100 |   324 |     1
101 |   190 |     1
107 |   377 |     1
(10 rows)

SELECT RTRIM(customer_city) || LTRIM(customer_state) AS city_state,
       AVG(annual_income), COUNT(*)
FROM customer_dimension
GROUP BY RTRIM(customer_city) || LTRIM(customer_state)
LIMIT 10;

city_state   | avg              | count
--------------+------------------+-------
AbileneTX    | 2505185.04338395 |   461
AlexandriaVA | 1353318.53249476 |   477
AllentownPA  | 1966380.40227273 |   440
Ann ArborMI  | 3052162.24637681 |   414
ArvadaCO     | 3070815.94799054 |   423
AthensGA     | 2131104.93208431 |   427
AustinTX     | 1721137.53629977 |   427
BaltimoreMD  | 1929520.38325991 |   454
BeaumontTX   | 2751110.05393258 |   445
BellevueWA   | 2307308.83801296 |   463
(10 rows)

Invalid Example

The following example returns an error because the GROUP BY clause
specifies a column not included in the select list:

SELECT customer_name FROM customer_dimension GROUP BY customer_city;

ERROR:  column "customer_dimension.customer_name" must appear in the
GROUP BY clause or be used in an aggregate function

HAVING Clause

Eliminates group rows that do not satisfy a predicate.

Syntax

HAVING predicate [ , ... ]

Parameters

predicate  Is the same as specified for the WHERE clause (page 386).

Notes

•  Each column referenced in predicate must unambiguously reference a
   grouping column, unless the reference appears within an aggregate
   function.
•  You can use expressions in the HAVING clause.

Example

The following example returns the employees with salaries greater than
$50,000:

SELECT employee_last_name, MAX(annual_salary) as "highest_salary"
FROM employee_dimension
GROUP BY employee_last_name
HAVING MAX(annual_salary) > 50000;

employee_last_name | highest_salary
--------------------+----------------
Bauer              |         920149
Brown              |         569079
Campbell           |         649998
Carcetti           |         195175
Dobisz             |         840902
Farmer             |         804890
Fortin             |         481490
Garcia             |         811231
Garnett            |         963104
Gauthier           |         927335
(10 rows)

ORDER BY Clause

Sorts a query result set on one or more columns.

Syntax

ORDER BY expression [ ASC | DESC ] [ , ... ]

Parameters

expression  Can be:
            •  The name or ordinal number of a SELECT list item
            •  An arbitrary expression formed from columns that do not
               appear in the SELECT list
            •  A CASE (page 73) expression

Notes

•  The ordinal number refers to the position of the result column, counting
   from the left beginning at one. This makes it possible to order by a
   column that does not have a unique name. (You can assign a name to a
   result column using the AS clause.)
•  Vertica uses the ASCII collating sequence to store data and to compare
   character strings. In general the order is:
   §  Space
   §  Numbers
   §  Uppercase letters
   §  Lowercase letters
   Special characters collate in between and after the groups mentioned.
   See man ascii for details.
•  For INTEGER, INT, and DATE/TIME data types, NULL appears first (smallest)
   in ascending order. For FLOAT, BOOLEAN, CHAR, and VARCHAR, NULL appears
   last (largest) in ascending order.

Example

The following example returns all records for customer Metamedia, sorted by
customer_city in ascending order:

SELECT customer_city
FROM customer_dimension
WHERE customer_name = 'Metamedia'
ORDER BY customer_city;

customer_city
---------------
Dallas
Erie
Fort Collins
Green Bay
Las Vegas
McAllen
San Diego
San Francisco
San Jose
Wichita Falls
(10 rows)

LIMIT Clause

Specifies the maximum number of result set rows to return.

Syntax

LIMIT { rows | ALL }

Parameters

rows  Specifies the maximum number of rows to return.
ALL   Returns all rows (same as omitting LIMIT).

Notes

When both LIMIT and OFFSET (page 394) are used, Vertica skips the specified
number of rows before it starts to count the rows to be returned. You can
use LIMIT without an ORDER BY clause (page 391) that includes all columns in
the select list, but the query could produce non-deterministic results.

Examples

Non-deterministic: Omits the ORDER BY clause and returns any five records
from the customer_dimension table:

SELECT customer_city FROM customer_dimension LIMIT 5;

customer_city
---------------
Baltimore
Nashville
Allentown
Clarksville
Baltimore
(5 rows)

Deterministic: Specifies the ORDER BY clause:

SELECT customer_city FROM customer_dimension ORDER BY customer_city LIMIT 5;

customer_city
---------------
Abilene
Abilene
Abilene
Abilene
Abilene
(5 rows)

OFFSET Clause

Omits a specified number of rows from the beginning of the result set.

Syntax

OFFSET rows

Parameters

rows  Specifies the number of result set rows to omit.

Notes

•  When both LIMIT (page 393) and OFFSET are specified, the specified
   number of rows are skipped before starting to count the rows to be
   returned.
•  When using OFFSET, use an ORDER BY clause (page 391). Otherwise the
   query returns an undefined subset of the result set.

Example

The following example is similar to the example used in the LIMIT clause
(page 393). If you want to see just records 6-10, however, use the OFFSET
clause to skip over the first five cities:

SELECT customer_city
FROM customer_dimension
WHERE customer_name = 'Metamedia'
ORDER BY customer_city
LIMIT 10 OFFSET 5;

customer_city
---------------
McAllen
San Diego
San Francisco
San Jose
Wichita Falls
(5 rows)

SET

Sets one of several run-time parameters.

Syntax

SET run-time-parameter

Parameters

run-time-parameter  Is one of the following:
                    DATESTYLE (page 396)
                    SEARCH_PATH
                    SESSION CHARACTERISTICS (page 398)
                    TIME ZONE (page 399)

DATESTYLE

The SET DATESTYLE command changes the DATESTYLE run-time parameter for the
current session. (See Date/Time Constants (page 58) for how this setting
also affects interpretation of input values.)

Syntax

SET DATESTYLE TO { value | 'value' } [ , ... ]

Parameters

The DATESTYLE parameter can have multiple, non-conflicting values:

Value   Interpretation                    Example
MDY     month-day-year                    12/17/1997
DMY     day-month-year                    17/12/1997
YMD     year-month-day                    1997-12-17
ISO     ISO 8601/SQL standard (default)   1997-12-17 07:37:16-08
SQL     traditional style                 12/17/1997 07:37:16.00 PST
GERMAN  regional style                    17.12.1997 07:37:16.00 CET

Notes

•  The SQL standard requires the use of the ISO 8601 format. The name of
   the "SQL" output format is a historical accident.
•  In the SQL style, day appears before month if DMY field ordering has
   been specified; otherwise month appears before day. The table below
   shows an example:

   DATESTYLE  Input Ordering  Example Output
   SQL, DMY   day/month/year  17/12/1997 15:37:16.00 PST
   SQL, MDY   month/day/year  12/17/1997 07:37:16.00 PST

•  INTERVAL output looks like the input format, except that units like
   CENTURY or WEEK are converted to years and days, and AGO is converted to
   an appropriate sign. In ISO mode the output looks like:
   [ quantity unit [ ... ] ] [ days ] [ hours:minutes:seconds ]
•  The SHOW (page 402) command displays the run-time parameters.

Example

SET DATESTYLE TO SQL, MDY;

SEARCH_PATH

Specifies the order in which Vertica searches schemas when a SQL statement
contains an unqualified table name. Vertica provides the SET search_path
statement instead of the CURRENT_SCHEMA statement found in some other
databases.

Syntax

SET SEARCH_PATH TO schemaname [ , ... ]

Parameters

schemaname  A comma-delimited list of schemas that indicates the order in
            which Vertica searches schemas when a SQL statement contains an
            unqualified table name.

            The default value for this parameter is '"$user", public',
            where:

            $user   Is the schema with the same name as the current user.
                    If the schema does not exist, $user is ignored.
            public  Is the public database. Public is ignored if there is
                    no schema named 'public'.

Notes

The first schema named in the search path is called the current schema. The
current schema is the first schema that Vertica searches. It is also the
schema in which new tables are created if the CREATE TABLE (page 346)
command does not specify a schema name.

Restrictions

None

Examples

The following example sets the order in which Vertica searches schemas to
T1, U1, and V1:

SET SEARCH_PATH TO T1, U1, V1;


SESSION CHARACTERISTICS

SET SESSION CHARACTERISTICS sets the transaction characteristics for
subsequent transactions of a user session. These are the isolation level
and the access mode (read/write or read-only).

Syntax

SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL
    { SERIALIZABLE | REPEATABLE READ | READ COMMITTED | READ UNCOMMITTED }
    { READ WRITE | READ ONLY }

Parameters

ISOLATION LEVEL   Determines what data the transaction can access when
                  other transactions are running concurrently.
SERIALIZABLE      Provides the strictest level of SQL transaction isolation
                  and is the default in Vertica. This level emulates
                  transactions executed one after another, serially, rather
                  than concurrently. It holds locks and blocks write
                  operations and is thus not recommended for normal query
                  operations. This is standard ANSI SQL semantics for ACID
                  transactions.
REPEATABLE READ   Is automatically converted to SERIALIZABLE by Vertica.
READ COMMITTED    Allows concurrent transactions.
READ UNCOMMITTED  Is automatically converted to READ COMMITTED by Vertica.
READ WRITE
READ ONLY         Determines whether the transaction is read/write or
                  read-only. Read/write is the default. When a transaction
                  is read-only, the following SQL commands are disallowed:
                  INSERT, UPDATE, DELETE, and COPY if the table they would
                  write to is not a temporary table; all CREATE, ALTER, and
                  DROP commands; and GRANT, REVOKE, and EXPLAIN if the
                  command it would execute is among those listed. This is a
                  high-level notion of read-only that does not prevent all
                  writes to disk.

Notes

•  SERIALIZABLE isolation does not apply to temporary tables, which are
   isolated by their transaction scope. Applications using SERIALIZABLE
   must be prepared to retry transactions due to serialization failures.
•  The isolation level cannot be changed after the first query (SELECT) or
   DML statement (INSERT, DELETE, UPDATE) of a transaction has been
   executed.
•  Use READ COMMITTED isolation or Snapshot Isolation for normal query
   operations, but be aware that there is a subtle difference between them
   (see below).

READ COMMITTED vs. Snapshot Isolation

By itself, AT EPOCH LATEST produces purely historical query behavior.
However, with READ COMMITTED, SELECT queries return the same result set as
AT EPOCH LATEST plus any changes made by the current transaction. Any
SELECT query within a transaction should see the transaction's own changes
regardless of isolation level.
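A minimal sketch of the statement described above; issue it before the first query or DML statement of the transaction (the SHOW step simply verifies the setting):

```sql
-- Set subsequent transactions in this session to READ COMMITTED,
-- read/write.
SET SESSION CHARACTERISTICS AS TRANSACTION
    ISOLATION LEVEL READ COMMITTED READ WRITE;

-- Verify: transaction_isolation appears among the run-time parameters.
SHOW ALL;
```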

TIME ZONE

Changes the TIME ZONE run-time parameter for the current session.

Syntax

SET TIME ZONE TO { value | 'value' }

Parameters

value  Is one of the following:
       •  One of the time zone names specified in the tz database, as
          described in Sources for Time Zone and Daylight Saving Time Data
          http://www.twinsun.com/tz/tz-link.htm. When using a Country/City
          name, do not omit the country or the city. For example:
          SET TIME ZONE TO 'Africa/Cairo';  -- valid
          SET TIME ZONE TO 'Cairo';         -- invalid
       •  The built-in constants LOCAL and DEFAULT, which set the time zone
          to the one specified in the TZ environment variable or, if TZ is
          undefined, from the operating system time zone. See Set the
          Default Time Zone and Using Time Zones with Vertica in the
          Installation and Configuration Guide.
       •  A signed integer representing an offset from UTC in hours.
          Positive integer values represent an offset east from UTC.
       •  An interval value (page 60)

Notes

•  TIME ZONE is a synonym for TIMEZONE. Both are allowed in Vertica syntax.
•  Include the required keyword TO.
•  The Time Zone Names for Setting TIME ZONE (page 400) listed in the next
   section are for convenience only and could be out of date.
•  The SHOW (page 402) command displays the run-time parameters.

Examples

SET TIME ZONE TO DEFAULT;
SET TIME ZONE TO 'PST8PDT';                -- Berkeley, California
SET TIME ZONE TO 'Europe/Rome';            -- Italy
SET TIME ZONE TO '-7';                     -- UTC offset equivalent to PDT
SET TIME ZONE TO INTERVAL '-08:00 HOURS';

See Also

Using Time Zones with Vertica in the Installation and Configuration Guide

Time Zone Names for Setting TIME ZONE

The following time zone names are recognized by Vertica as valid settings
for the SQL time zone (the TIME ZONE run-time parameter).

Note: The names listed here are for convenience only and could be out of
date. Refer to the Sources for Time Zone and Daylight Saving Time Data
http://www.twinsun.com/tz/tz-link.htm page for precise information.

These names are not the same as the names shown in Time Zone Abbreviations
For Input, which are recognized by Vertica in date/time input values, where
date/time input names represent a fixed offset from UTC. The TIME ZONE
names shown below imply a local daylight-savings time rule. The table is
primarily sorted by the name of the principal city of the zone. In many
cases there are several equivalent names for the same zone; these are
listed on the same line.

Time Zone
Africa
America
Antarctica
Asia
Atlantic
Australia
CET
EET
Etc/GMT
Europe
Factory
GMT   GMT+0  GMT-0  GMT0  Greenwich  Etc/GMT  Etc/GMT+0  Etc/GMT-0
      Etc/GMT0  Etc/Greenwich
Indian
MET
Pacific

UCT   Etc/UCT
UTC   Universal  Zulu  Etc/UTC  Etc/Universal  Etc/Zulu
WET

In addition to the names listed in the table, Vertica accepts time zone
names of the form STDoffset or STDoffsetDST, where STD is a zone
abbreviation, offset is a numeric offset in hours west from UTC, and DST is
an optional daylight-savings zone abbreviation, assumed to stand for one
hour ahead of the given offset. For example, if EST5EDT were not already a
recognized zone name, it would be accepted and would be functionally
equivalent to USA East Coast time. When a daylight-savings zone name is
present, it is assumed to be used according to USA time zone rules, so this
feature is of limited use outside North America. One should also be wary
that this provision can lead to silently accepting bogus input, since there
is no check on the reasonableness of the zone abbreviations. For example,
SET TIME ZONE TO FOOBANKO works, leaving the system effectively using a
rather peculiar abbreviation for GMT.

SHOW

Displays run-time parameters for the current session.

Syntax

SHOW { name | ALL }

Parameters

name  Is one of:
      DATESTYLE
      TIME ZONE
ALL   Shows all runtime parameters.

Notes

The SET < Runtime Parameter > command sets the run-time parameters.

Examples

SHOW ALL;

name                  | setting
-----------------------+------------------
datestyle             | ISO, MDY
timezone              | America/New_York
search_path           | "$user", public
transaction_isolation | SERIALIZABLE
(4 rows)

SHOW SEARCH_PATH

Shows the order in which Vertica searches schemas when a SQL statement
contains an unqualified table name.

Syntax

SHOW SEARCH_PATH

Parameters

None

Restrictions

None

Example

SHOW SEARCH_PATH;

name        | setting
-------------+-----------------
search_path | "$user", public
(1 row)

TRUNCATE TABLE

TRUNCATE TABLE is a DDL statement that removes all storage associated with
a table, while preserving the table definitions. TRUNCATE TABLE can be
useful for testing purposes, letting you remove all table data without
having to recreate projections.

Syntax

TRUNCATE TABLE [schema_name.]table

Parameters

[schema_name.]  Specifies the name of an optional schema.
table           Specifies the name of a base table or temporary table.

Notes

•  Only the superuser or database owner can truncate a table. The schema
   owner can drop a table but cannot truncate a table.
•  TRUNCATE TABLE commits the entire transaction, even if the TRUNCATE
   statement fails. TRUNCATE TABLE auto-commits the current transaction
   after statement execution and cannot be rolled back.
•  TRUNCATE TABLE takes X (exclusive) locks until the truncation process
   completes, when the savepoint is then released.
•  If the truncated table has out-of-date projections, those projections
   are cleared and marked up-to-date after the truncation operation
   completes, and are ready for data reload.
•  If the truncated table is a fact table and contains prejoin projections,
   the projections show 0 rows after the transaction completes.
•  If the truncated table is a dimension table, the system returns the
   following error:
   Cannot truncate a dimension table with pre-joined projections
   Drop the prejoin projection first, and then issue the TRUNCATE command.
•  To truncate a temporary table specified AS ON COMMIT DELETE ROWS without
   ending the transaction, use DELETE FROM temp_table (page 358) syntax.
   Note: The effect of DELETE FROM depends on the table type. If the table
   is specified as ON COMMIT DELETE ROWS, then DELETE FROM works like
   TRUNCATE TABLE; otherwise it behaves like a normal delete in that it
   does not truncate the table.
GROUP BY and HAVING operations cannot be applied to the results. See Also DELETE (page 358) and DROP TABLE (page 362) Transactions in the Concepts Guide Deleting Data in the Administrator's Guide UNION Combines the results of two or more select statements.. Syntax SELECT . the data recovers from that current epoch onward.. the rightmost ORDER BY. a row in the results of a UNION operation must have existed in the results from one of the SELECT statements.. Vertica returns an error. or both. The results of a UNION contain only distinct rows. or OFFSET clause in the UNION query does not need to be enclosed in parentheses to the rightmost query.SQL Reference Manual • After truncate operations complete. Each SELECT statement must have the same number of items in the select list as well as compatible data types.. If the statement is not enclosed in parentheses an error is returned. LIMIT. as well as for unsegmented / segmented projections. [ OFFSET integer ] Note: SELECT statements can contain ORDER BY. .... unless duplicate rows are not wanted. therefore. [ ORDER BY { column-name | ordinal-number } [ ASC | DESC ] [.. LIMIT. . UNION pays the performance price of eliminating duplicates. [ UNION [ ALL ] select ]. UNION [ ALL ] select . Specifically.. or OFFSET clauses must be enclosed in parentheses .. LIMIT or OFFSET clauses if the statement is enclosed within parentheses. If the data types are incompatible. A SELECT statement containing ORDER BY. Because TRUNCATE TABLE removes all history for the table. However. so use UNION ALL to keep duplicate rows. [ LIMIT { integer | ALL } ] .. Usage The results of several SELECT statements can be combined into a larger result using UNION. -404- . ROS. use UNION ALL for its performance benefits.] ] .. Each SELECT statement produces results in which the UNION combines all those results into a final single result.. AT EPOCH queries return nothing. This indicates to perform these operations on results of the UNION operation. 
TRUNCATE TABLE behaves the same when you have data in WOS..
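The distinct-row behavior described above is standard SQL, so it can be sketched outside Vertica. The following minimal example uses Python's built-in sqlite3 module, whose UNION / UNION ALL duplicate-elimination semantics match on this point; the table names and sample rows mirror the Company_A / Company_B examples in this section.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE company_a (id INT, emp_lname TEXT)")
cur.execute("CREATE TABLE company_b (id INT, emp_lname TEXT)")
cur.executemany("INSERT INTO company_a VALUES (?, ?)",
                [(1234, "Vincent"), (5678, "Butch"), (9012, "Marcellus")])
cur.executemany("INSERT INTO company_b VALUES (?, ?)",
                [(4321, "Marvin"), (9012, "Marcellus"), (8765, "Zed")])

# UNION eliminates the (9012, 'Marcellus') row that appears in both tables...
union_rows = cur.execute(
    "SELECT id, emp_lname FROM company_a "
    "UNION SELECT id, emp_lname FROM company_b").fetchall()

# ...while UNION ALL skips the duplicate-elimination pass and keeps it.
union_all_rows = cur.execute(
    "SELECT id, emp_lname FROM company_a "
    "UNION ALL SELECT id, emp_lname FROM company_b").fetchall()

print(len(union_rows))      # 5
print(len(union_all_rows))  # 6
```

As the section notes, the UNION ALL form is cheaper precisely because it never has to sort or hash the combined result to find duplicates.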

If ORDER BY is used, only integers and column names from the first (leftmost) SELECT statement are allowed in the order by list. The integers specify the position of the columns on which to sort. The column names displayed in the results are the column names of the first (leftmost) SELECT statement. The ordering of the results of a UNION operation does not necessarily depend on the ordering of the results of each SELECT statement.

Notes
UNION correlated and noncorrelated subquery predicates are also supported:
SELECT * FROM T1
WHERE T1.x IN
   (SELECT MAX(c1) FROM T2
    UNION ALL SELECT MAX(cc1) FROM T3
    UNION ALL SELECT MAX(d1) FROM T4);

Examples
Consider the following two tables:

Company_A
  id  | emp_lname  |    dept     | sales
------+------------+-------------+-------
 1234 | Vincent    | auto parts  |  1000
 5678 | Butch      | auto parts  |  2500
 9012 | Marcellus  | floral      |   500

Company_B
  id  | emp_lname  |    dept     | sales
------+------------+-------------+-------
 4321 | Marvin     | home goods  |   250
 9012 | Marcellus  | home goods  |   500
 8765 | Zed        | electronics | 20000

The following query lists all distinct IDs and surnames of employees:
SELECT id, emp_lname FROM company_A
UNION
SELECT id, emp_lname FROM company_B;
  id  | emp_lname
------+-----------
 1234 | Vincent
 4321 | Marvin
 5678 | Butch
 8765 | Zed
 9012 | Marcellus
(5 rows)

The following query lists all IDs and surnames of employees:

SELECT id, emp_lname FROM company_A
UNION ALL
SELECT id, emp_lname FROM company_B;
  id  | emp_lname
------+-----------
 1234 | Vincent
 5678 | Butch
 9012 | Marcellus
 4321 | Marvin
 8765 | Zed
 9012 | Marcellus
(6 rows)

The next example returns the top two performing salespeople in each company combined:
(SELECT id, emp_lname, sales FROM company_A ORDER BY sales LIMIT 2)
UNION ALL
(SELECT id, emp_lname, sales FROM company_B ORDER BY sales LIMIT 2);
  id  | emp_lname | sales
------+-----------+-------
 4321 | Marvin    |   250
 9012 | Marcellus |   500
 9012 | Marcellus |   500
 1234 | Vincent   |  1000
(4 rows)

In this example, return all employee records ordered by sales. Note that the ORDER BY clause is applied to the entire result:
SELECT id, emp_lname, sales FROM company_A
UNION
SELECT id, emp_lname, sales FROM company_B
ORDER BY sales;
  id  | emp_lname | sales
------+-----------+-------
 4321 | Marvin    |   250
 9012 | Marcellus |   500
 1234 | Vincent   |  1000
 5678 | Butch     |  2500
 8765 | Zed       | 20000
(5 rows)
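The rule that a trailing ORDER BY binds to the whole compound query, and that ordinal column positions are accepted in its order by list, is portable SQL. Here is a minimal sketch using Python's built-in sqlite3 module as a stand-in for Vertica, with the same Company_A / Company_B data as the examples above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE company_a (id INT, emp_lname TEXT, sales INT)")
cur.execute("CREATE TABLE company_b (id INT, emp_lname TEXT, sales INT)")
cur.executemany("INSERT INTO company_a VALUES (?, ?, ?)",
                [(1234, "Vincent", 1000), (5678, "Butch", 2500),
                 (9012, "Marcellus", 500)])
cur.executemany("INSERT INTO company_b VALUES (?, ?, ?)",
                [(4321, "Marvin", 250), (9012, "Marcellus", 500),
                 (8765, "Zed", 20000)])

# The trailing ORDER BY is not tied to the second SELECT: it sorts the
# combined, de-duplicated result.  "ORDER BY 3" refers to the ordinal
# position of the sales column, as the section describes.
rows = cur.execute(
    "SELECT id, emp_lname, sales FROM company_a "
    "UNION "
    "SELECT id, emp_lname, sales FROM company_b "
    "ORDER BY 3").fetchall()

print([r[2] for r in rows])  # [250, 500, 1000, 2500, 20000]
```

The output interleaves rows from both tables in a single ascending sales order, matching the "ORDER BY applied to the entire result" example above.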

And now sum the sales for each company, ordered by sales in descending order, and grouped by department:
(SELECT 'company a' AS company, dept, SUM(sales)
 FROM company_a
 GROUP BY dept
 ORDER BY 2 DESC)
UNION
(SELECT 'company b' AS company, dept, SUM(sales)
 FROM company_b
 GROUP BY dept
 ORDER BY 2 DESC)
ORDER BY 1;
  company  |    dept     |  sum
-----------+-------------+-------
 company a | auto parts  |  3500
 company a | floral      |   500
 company b | electronics | 20000
 company b | home goods  |   750
(4 rows)

The final query shows the result of mismatched data types:
SELECT id, emp_lname FROM company_a
UNION
SELECT emp_lname, id FROM company_b;
ERROR: UNION types int8 and character varying cannot be matched

See Also
SELECT (page 382)
Subqueries and UNION in Subqueries in the Programmer's Guide
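The select-list compatibility rule shown in the final query is enforced in two parts: both sides must have the same number of select-list items, and the corresponding items must have compatible types. As a hedged sketch using Python's sqlite3 module (not Vertica), the count half of the rule can be demonstrated directly; SQLite is dynamically typed, so it has no direct equivalent of Vertica's "UNION types int8 and character varying cannot be matched" error.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE company_a (id INT, emp_lname TEXT)")
cur.execute("CREATE TABLE company_b (id INT, emp_lname TEXT)")

# Two select-list items on the left, one on the right: every SQL engine
# rejects this, though the exact error text varies by database.
err = None
try:
    cur.execute("SELECT id, emp_lname FROM company_a "
                "UNION SELECT id FROM company_b")
except sqlite3.OperationalError as exc:
    err = exc

print("error raised:", err is not None)  # error raised: True
```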

UPDATE
Replaces the values of the specified columns in all rows for which a specific condition is true. All other columns and rows in the table are unchanged.

Syntax
UPDATE [schema-name.]table SET column = { expression | DEFAULT } [, ...]
   [ FROM from-list ]
   [ WHERE clause (on page 386) ]

Parameters
[schema-name.]table	Specifies the name of a table in the schema. When using more than one schema, specify the schema that contains the table.
column	Specifies the name of a non-key column in the table.
expression	Specifies a value to assign to the column. The expression can use the current values of this and other columns in the table. For example:
	UPDATE T1 SET C1 = C1+1;
from-list	A list of table expressions, allowing columns from other tables to appear in the WHERE condition and the update expressions. This is similar to the list of tables that can be specified in the FROM Clause (on page 384) of a SELECT command. Note that the target table must not appear in the from-list.

Notes
• UPDATE inserts new records into the WOS and marks the old records for deletion.
• You cannot UPDATE a projection.
• You cannot UPDATE columns that have primary key or foreign key referential integrity constraints.
• To use the DELETE (page 358) or UPDATE (page 408) commands with a WHERE clause (page 386), you must have both SELECT and DELETE privileges on the table.
• Be aware of WOS Overload.

Examples
UPDATE FACT SET PRICE = PRICE - COST * 80 WHERE COST > 100;
UPDATE Retail.CUSTOMER SET STATE = 'NH' WHERE CID > 100;
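The UPDATE semantics described here, a SET expression that may read the row's current column values, restricted by a WHERE clause, are standard SQL. This minimal sketch uses Python's built-in sqlite3 module rather than Vertica; the table and sample values are invented for illustration, loosely modeled on the FACT example above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE fact (item TEXT, price INT, cost INT)")
cur.executemany("INSERT INTO fact VALUES (?, ?, ?)",
                [("a", 5000, 150), ("b", 900, 50)])

# The SET expression reads the row's current price and cost; the WHERE
# clause limits the rewrite to rows with cost > 100, leaving row "b" alone.
cur.execute("UPDATE fact SET price = price - cost * 8 WHERE cost > 100")

print(cur.execute("SELECT item, price FROM fact ORDER BY item").fetchall())
# [('a', 3800), ('b', 900)]  -- 5000 - 150*8 = 3800; "b" is unchanged
```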

SQL System Tables (Monitoring APIs)
Vertica provides system tables that let you monitor the health of your database. These tables can be queried the same way you perform query operations on base or temporary tables, with full SELECT support, including expressions, predicates, aggregates, analytics, subqueries, and joins.

To view all of the system tables, issue the following command:
SELECT * FROM system_tables;

The following two tables list the catalog and monitor tables. See also Using the SQL Monitoring API in the Administrator's Guide.

Restrictions and Cautions
• DDL and DML are not supported.
• System tables do not hold historical data.

You can use external monitoring tools or scripts to query the system tables and act upon the information as necessary. For example, when a host failure causes the K-safety level to fall below a desired level, the tool or script can notify the database administrator and/or appropriate IT personnel. For example:
SELECT t.table_name AS table_name,
       SUM(ps.wos_row_count + ps.ros_row_count) AS row_count,
       SUM(ps.wos_used_bytes + ps.ros_used_bytes) AS byte_count
FROM tables t
JOIN projections p ON t.table_id = p.anchor_table_id
JOIN projection_storage ps ON p.projection_name = ps.projection_name
WHERE (ps.wos_used_bytes + ps.ros_used_bytes) > 500000
GROUP BY t.table_name
ORDER BY byte_count DESC;
     table_name     | row_count | byte_count
--------------------+-----------+------------
 online_sales_fact  |    200000 |   11920371
 store_sales_fact   |    200000 |    7621694
 product_dimension  |    240000 |    7367560
 customer_dimension |    200000 |    6981564
 store_orders_fact  |    200000 |    5126330
(5 rows)

System tables are grouped into one of two schemas:
• A catalog schema called v_catalog (page 411)
• A monitoring schema called v_monitor (page 424)

The system table schemas reside in the default search path, so there is no need to specify schema.table in queries unless you change the search path to exclude v_monitor and v_catalog. See Setting Schema Search Paths in the Administrator's Guide.
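The polling pattern described above (an external script that queries a system table and alerts when a threshold is crossed) can be sketched outside Vertica. This example uses Python's sqlite3 module with a stand-in table that carries only the columns the manual's example query touches; the real v_monitor.projection_storage table has more columns, the sample numbers are invented, and the HAVING-on-aggregate form is an adaptation of the WHERE predicate used in the example above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
# Stand-in for v_monitor.projection_storage, reduced to the columns used
# by the manual's byte-count query.
cur.execute("""CREATE TABLE projection_storage (
                   projection_name TEXT, anchor_table_name TEXT,
                   wos_used_bytes INT, ros_used_bytes INT)""")
cur.executemany("INSERT INTO projection_storage VALUES (?, ?, ?, ?)",
                [("sales_p1", "store_sales_fact", 200000, 7421694),
                 ("dim_p1", "date_dimension", 1000, 2000)])

# Sum WOS + ROS bytes per anchor table and keep only tables over a
# threshold, largest first -- the same shape as the manual's example.
big_tables = cur.execute(
    "SELECT anchor_table_name, "
    "       SUM(wos_used_bytes + ros_used_bytes) AS byte_count "
    "FROM projection_storage "
    "GROUP BY anchor_table_name "
    "HAVING SUM(wos_used_bytes + ros_used_bytes) > 500000 "
    "ORDER BY byte_count DESC").fetchall()

for table, nbytes in big_tables:
    # In a real monitoring script this would notify the DBA or IT staff.
    print(f"ALERT: {table} uses {nbytes} bytes")
```

Run periodically (for example, from cron), a script like this implements the "notify the database administrator" workflow the section describes.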

Table 1: System tables in the v_catalog schema

Catalog Tables	Description
COLUMNS (page 411)	Provides information about columns.
FOREIGN_KEYS (page 412)	Provides foreign key information.
GRANTS (page 413)	Provides grant information.
PRIMARY_KEYS (page 415)	Provides primary key information.
PROJECTIONS (page 416)	Provides information about projections.
TABLE_CONSTRAINTS (page 418)	Provides information about table constraints.
TABLES (page 419)	Provides information about tables in the database.
TYPES (page 420)	Provides information about supported data types.
USERS (page 421)	Provides information about users.
VIEW_COLUMNS (page 421)	Provides view attribute information.
VIEWS (page 422)	Provides information about all views within the system.
SYSTEM_TABLES (page 423)	Displays a list of all system table names.

Table 2: System tables in the v_monitor schema

Monitor Tables	Description
ACTIVE_EVENTS (page 424)	Displays all the active events in the cluster.
COLUMN_STORAGE (page 426)	Returns the amount of disk storage used by each column of each projection on each node.
CURRENT_SESSION (page 428)	Returns information about the current active session.
DISK_RESOURCE_REJECTIONS (page 431)	Returns requests for resources that are rejected due to disk space shortages.
DISK_STORAGE (page 432)	Returns the amount of disk storage used by the database on each node.
EVENT_CONFIGURATIONS (page 435)	Returns configuration information about current events.
EXECUTION_ENGINE_PROFILING (page 436)	Returns information regarding query execution runs.
HOST_RESOURCES (page 437)	Returns information about host profiling.
LOAD_STREAMS (page 440)	Returns load metrics for each load stream on each node.
LOCAL_NODES (page 441)	Monitors the status of local nodes in the cluster.
LOCKS (page 441)	Monitors lock grants and requests for all nodes.
NODE_RESOURCES (page 444)	Provides a snapshot of the node. This is useful for regularly polling the node with automated tools or scripts.
PARTITIONS (page 445)	Displays partition metadata, one row per partition key, per ROS container.
PROJECTIONS (page 416)	Returns information regarding projections.
PROJECTION_REFRESHES (page 446)	Returns information about refresh operations for projections.
PROJECTION_STORAGE (page 448)	Returns the amount of disk storage used by each projection on each node.
QUERY_METRICS (page 449)	Monitors the sessions and queries executing on each node.
QUERY_PROFILES (page 450)	Provides information regarding executed queries.
RESOURCE_REJECTIONS (page 452)	Returns requests for resources that are rejected by the resource manager.
RESOURCE_USAGE (page 454)	Returns system resource management on each node.
SESSION_PROFILES (page 457)	Provides basic session parameters and lock time out data.
SESSIONS (page 458)	Monitors external sessions.
STORAGE_CONTAINERS (page 460)	Monitors information about each storage container in the database.
SYSTEM (page 461)	Monitors the overall state of the database.
TUPLE_MOVER_OPERATIONS (page 462)	Monitors the status of the Tuple Mover on each node.
WOS_CONTAINER_STORAGE (page 463)	Monitors information about WOS storage, which is divided into regions.

V_CATALOG Schema
The system tables in this section reside in the v_catalog schema.

COLUMNS
Provides table column information.

Column Name	Data Type	Description
TABLE_ID	INTEGER	The unique table OID assigned by the Vertica catalog.
TABLE_SCHEMA	VARCHAR	The name of the schema.
TABLE_NAME	VARCHAR	The name of the table.
IS_SYSTEM_TABLE	VARCHAR	Indicates whether the table is a system table, where t is true and f is false.
COLUMN_NAME	VARCHAR	The name of the column.
DATA_TYPE	VARCHAR	The data type assigned to the column.
DATA_TYPE_ID	INTEGER	A unique ID assigned by the Vertica catalog.
CHARACTER_MAXIMUM_LENGTH	VARCHAR	The maximum allowable length of the column.
ORDINAL_POSITION	VARCHAR	The column's position in the table.
IS_NULLABLE	VARCHAR	Indicates whether the column can contain null values, where t is true and f is false.
COLUMN_DEFAULT	VARCHAR	The default value of a column, such as empty or expression.

Example
=> SELECT table_schema, table_name, column_name, data_type, is_nullable
   FROM columns
   WHERE table_schema = 'store' AND data_type = 'Date';
 table_schema |    table_name     |      column_name       | data_type | is_nullable
--------------+-------------------+------------------------+-----------+-------------
 store        | store_dimension   | first_open_date        | Date      | f
 store        | store_dimension   | last_remodel_date      | Date      | f
 store        | store_orders_fact | date_ordered           | Date      | f
 store        | store_orders_fact | date_shipped           | Date      | f
 store        | store_orders_fact | expected_delivery_date | Date      | f
 store        | store_orders_fact | date_delivered         | Date      | f
(6 rows)

FOREIGN_KEYS
Provides foreign key information.

Column Name	Data Type	Description
CONSTRAINT_ID	INTEGER	The object ID assigned by the Vertica catalog.
CONSTRAINT_NAME	VARCHAR	The name of the constraint.
COLUMN_NAME	VARCHAR	The name of the column that is constrained.

ORDINAL_POSITION	VARCHAR	The position of the constraint respective to other constraints in the table.
TABLE_NAME	VARCHAR	The name of the table.
REFERENCE_TABLE_NAME	VARCHAR	References the TABLE_NAME column in the PRIMARY_KEY table.
CONSTRAINT_TYPE	VARCHAR	The constraint type, f, for foreign key.
REFERENCE_COLUMN_NAME	VARCHAR	References the COLUMN_NAME column in the PRIMARY_KEY table.
TABLE_SCHEMA	VARCHAR	The name of the schema.
REFERENCE_TABLE_SCHEMA	VARCHAR	References the TABLE_SCHEMA column in the PRIMARY_KEY table.

Example
SELECT constraint_name, table_name, ordinal_position, reference_table_name
FROM foreign_keys ORDER BY 3;
      constraint_name      |    table_name     | ordinal_position | reference_table_name
---------------------------+-------------------+------------------+-----------------------
 fk_store_sales_date       | store_sales_fact  |                1 | date_dimension
 fk_online_sales_saledate  | online_sales_fact |                1 | date_dimension
 fk_store_orders_product   | store_orders_fact |                1 | product_dimension
 fk_inventory_date         | inventory_fact    |                1 | date_dimension
 fk_inventory_product      | inventory_fact    |                2 | product_dimension
 fk_store_sales_product    | store_sales_fact  |                2 | product_dimension
 fk_online_sales_shipdate  | online_sales_fact |                2 | date_dimension
 fk_store_orders_product   | store_orders_fact |                2 | product_dimension
 fk_inventory_product      | inventory_fact    |                3 | product_dimension
 fk_store_sales_product    | store_sales_fact  |                3 | product_dimension
 fk_online_sales_product   | online_sales_fact |                3 | product_dimension
 fk_store_orders_store     | store_orders_fact |                3 | store_dimension
 fk_online_sales_product   | online_sales_fact |                4 | product_dimension
 fk_inventory_warehouse    | inventory_fact    |                4 | warehouse_dimension
 fk_store_orders_vendor    | store_orders_fact |                4 | vendor_dimension
 fk_store_sales_store      | store_sales_fact  |                4 | store_dimension
 fk_store_orders_employee  | store_orders_fact |                5 | employee_dimension
 fk_store_sales_promotion  | store_sales_fact  |                5 | promotion_dimension
 fk_online_sales_customer  | online_sales_fact |                5 | customer_dimension
 fk_store_sales_customer   | store_sales_fact  |                6 | customer_dimension
 fk_online_sales_cc        | online_sales_fact |                6 | call_center_dimension
 fk_store_sales_employee   | store_sales_fact  |                7 | employee_dimension
 fk_online_sales_op        | online_sales_fact |                7 | online_page_dimension
 fk_online_sales_shipping  | online_sales_fact |                8 | shipping_dimension
 fk_online_sales_warehouse | online_sales_fact |                9 | warehouse_dimension
 fk_online_sales_promotion | online_sales_fact |               10 | promotion_dimension
(26 rows)

GRANTS
Provides grant information.

Column Name	Data Type	Description
GRANTEE_ID	INTEGER	The grantee object ID (OID) from the catalog.

GRANTEE	VARCHAR	The user being granted permission.
GRANTOR_ID	INTEGER	The unique OID assigned by the Vertica catalog.
GRANTOR	VARCHAR	The user granting permission.
PRIVILEGES	INTEGER	The bitmask representation of the privileges being granted.
PRIVILEGES_DESCRIPTION	VARCHAR	A readable description of the privileges being granted, for example INSERT, SELECT.
OBJECT_ID	INTEGER	The object ID from the catalog.
TABLE_SCHEMA	VARCHAR	The name of the schema.
TABLE_NAME	VARCHAR	The name of the table.

Notes
The vsql commands \dp and \z both include the schema name in the output:

mydb=> \dp
       Access privileges for database "vmartdb"
 Grantee | Grantor | Privileges | Schema |      Name
---------+---------+------------+--------+-----------------
         | release | USAGE      |        | public
         | vertica | USAGE      |        | monitoring
         | vertica | USAGE      |        | catalog
         | vertica | USAGE      |        | system
         | release | USAGE      |        | v_internal
         | release | USAGE      |        | v_catalog
         | release | USAGE      |        | v_monitor
         | release | USAGE      |        | designer_system
(8 rows)

mydb=> \z
       Access privileges for database "vmartdb"
 Grantee | Grantor | Privileges | Schema |      Name
---------+---------+------------+--------+-----------------
         | release | USAGE      |        | public
         | vertica | USAGE      |        | monitoring
         | vertica | USAGE      |        | catalog
         | vertica | USAGE      |        | system
         | release | USAGE      |        | v_internal
         | release | USAGE      |        | v_catalog
         | release | USAGE      |        | v_monitor
         | release | USAGE      |        | designer_system
(8 rows)

The vsql command \dp *.tablename displays table names in all schemas. This command lets you distinguish the grants for same-named tables in different schemas:

PRIMARY_KEYS Provides primary key information. UPDATE. DELETE. DELETE. SELECT. UPDATE. The name of the column. The name of the table The constraint type.events.SQL System Tables (Monitoring APIs) $ \dp *.* Access privileges for database "dbadmin" grantee | grantor | privileges_description | table_schema | table_name ---------+---------+--------------------------------------------+--------------+-----------user2 | dbadmin | INSERT. SELECT. SELECT | schema2 | events (4 rows) The vsql command \dp schemaname. REFERENCES | schema2 | events user1 | dbadmin | INSERT. for primary key. table_name. constraint_name | table_name | ordinal_position | table_schema ---------------------------+-------------------+------------------+-------------fk_store_sales_date | store_sales_fact | 1 | store -415- .* displays all tables in the named schema: $ \dp schema1. SELECT. UPDATE. table_schema FROM foreign_keys ORDER BY 3. Column Name CONSTRAINT_ID CONSTRAINT_NAME COLUMN_NAME ORDINAL_POSITION TABLE_NAME CONSTRAINT_TYPE TABLE_SCHEMA Data Type INTEGER VARCHA R VARCHA R VARCHA R VARCHA R VARCHA R VARCHA R Description The object ID assigned by the Vertica catalog. REFERENCES | schema1 | events user1 | dbadmin | SELECT | schema1 | events user2 | dbadmin | INSERT. Access privileges for database "dbadmin" Grantee | Grantor | Privileges | Schema | Table ---------+---------+--------------------------------------------+---------+-------user2 | dbadmin | INSERT. DELETE. REFERENCES | schema1 | events user1 | dbadmin | SELECT | schema1 | events (2 rows) Call the GRANTS table: SELECT * FROM GRANTS. The position of the constraint respective to other constraints in the table. The name of the constraint. p. The name of the schema Example Request specific columns from the PRIMARY_KEYS table: SELECT constraint_name. ordinal_position.

The unique numeric identification (OID) of the anchor table. Column Name PROJECTION_SCHEMA_ID PROJECTION_SCHEMA PROJECTION_ID PROJECTION_NAME OWNER_ID OWNER_NAME ANCHOR_TABLE_ID Data Type INTEGER VARCHA R INTEGER VARCHA R INTEGER VARCHA R INTEGER Description A unique numeric ID (OID) that identifies the specific schema that contains the projection. The name of the schema that contains the projection. for pre-join projections. ANCHOR_TABLE_NAME VARCHA R INTEGER NODE_ID -416- . A unique numeric ID (OID) that identifies the projection.SQL Reference Manual fk_online_sales_saledate fk_store_orders_product fk_inventory_date fk_inventory_product fk_store_sales_product fk_online_sales_shipdate fk_store_orders_product fk_inventory_product fk_store_sales_product fk_online_sales_product fk_store_orders_store fk_online_sales_product fk_inventory_warehouse fk_store_orders_vendor fk_store_sales_store fk_store_orders_employee fk_store_sales_promotion fk_online_sales_customer fk_store_sales_customer fk_online_sales_cc fk_store_sales_employee fk_online_sales_op fk_online_sales_shipping fk_online_sales_warehouse fk_online_sales_promotion (26 rows) | | | | | | | | | | | | | | | | | | | | | | | | | online_sales_fact store_orders_fact inventory_fact inventory_fact store_sales_fact online_sales_fact store_orders_fact inventory_fact store_sales_fact online_sales_fact store_orders_fact online_sales_fact inventory_fact store_orders_fact store_sales_fact store_orders_fact store_sales_fact online_sales_fact store_sales_fact online_sales_fact store_sales_fact online_sales_fact online_sales_fact online_sales_fact online_sales_fact | | | | | | | | | | | | | | | | | | | | | | | | | 1 1 1 2 2 2 2 3 3 3 3 4 4 4 4 5 5 5 6 6 7 7 8 9 10 | | | | | | | | | | | | | | | | | | | | | | | | | online_sales store public public store online_sales store public store online_sales store online_sales public store store store store online_sales store online_sales store online_sales online_sales online_sales 
online_sales PROJECTIONS Provides information about projections. The name of the projection. The name of the projection's owner. or the name of the table from which the projection was created if it isn't a pre-join projection. or the OID of the table from which the projection was created if it isn't a pre-join projection. A unique numeric ID (OID) that identifies the owner of the projection. or nodes. that contain the projection. The name of the anchor table. for pre-join projections. A unique numeric ID (OID) that identifies the node.

or nodes. is_prejoin. anchor_table_name. The epoch in which the projection was created. projection_name | anchor_table_name | is_prejoin | is_up_to_date ------------------------------+-----------------------+------------+--------------customer_dimension_site01 | customer_dimension | f | t customer_dimension_site02 | customer_dimension | f | t customer_dimension_site03 | customer_dimension | f | t customer_dimension_site04 | customer_dimension | f | t product_dimension_site01 | product_dimension | f | t product_dimension_site02 | product_dimension | f | t product_dimension_site03 | product_dimension | f | t product_dimension_site04 | product_dimension | f | t store_sales_fact_p1 | store_sales_fact | t | t store_sales_fact_p1_b1 | store_sales_fact | t | t store_orders_fact_p1 | store_orders_fact | t | t store_orders_fact_p1_b1 | store_orders_fact | t | t online_sales_fact_p1 | online_sales_fact | t | t online_sales_fact_p1_b1 | online_sales_fact | t | t promotion_dimension_site01 | promotion_dimension | f | t promotion_dimension_site02 | promotion_dimension | f | t promotion_dimension_site03 | promotion_dimension | f | t promotion_dimension_site04 | promotion_dimension | f | t date_dimension_site01 | date_dimension | f | t date_dimension_site02 | date_dimension | f | t date_dimension_site03 | date_dimension | f | t date_dimension_site04 | date_dimension | f | t vendor_dimension_site01 | vendor_dimension | f | t vendor_dimension_site02 | vendor_dimension | f | t vendor_dimension_site03 | vendor_dimension | f | t vendor_dimension_site04 | vendor_dimension | f | t employee_dimension_site01 | employee_dimension | f | t employee_dimension_site02 | employee_dimension | f | t employee_dimension_site03 | employee_dimension | f | t employee_dimension_site04 | employee_dimension | f | t shipping_dimension_site01 | shipping_dimension | f | t shipping_dimension_site02 | shipping_dimension | f | t shipping_dimension_site03 | shipping_dimension | f | t shipping_dimension_site04 | 
shipping_dimension | f | t warehouse_dimension_site01 | warehouse_dimension | f | t warehouse_dimension_site02 | warehouse_dimension | f | t warehouse_dimension_site03 | warehouse_dimension | f | t warehouse_dimension_site04 | warehouse_dimension | f | t inventory_fact_p1 | inventory_fact | f | t inventory_fact_p1_b1 | inventory_fact | f | t store_dimension_site01 | store_dimension | f | t -417- . is_up_to_date FROM projections. K-safety value for the projection. that contain the projection. Indicates whether or not the projection is a pre-join projection where t is true and f is false. Example SELECT projection_name. Projections must be up-to-date to be used in queries. Indicates whether the projection is current where t is true and f is false.SQL System Tables (Monitoring APIs) NODE_NAME IS_PREJOIN CREATED_EPOCH VERIFIED_FAULT_TOLERAN CE IS_UP_TO_DATE VARCHA R BOOLEA N INTEGER INTEGER BOOLEA N The name of the node.

constraint_type FROM table_constraints ORDER BY constraint_type. The number of foreign keys. The table ID assigned by Vertica. FOREIGN KEY. Column Name CONSTRAINT_ID CONSTRAINT_NAME Data Type VARCHAR VARCHAR Description The constraint object ID from the table assigned by Vertica. The schema object ID assigned by Vertica. 'f'. 'u' or 'd' which refer to 'check'. Is one of 'c'. or PRIMARY KEY. constraint_name | constraint_type ---------------------------+----------------fk_online_sales_promotion | f fk_online_sales_warehouse | f fk_online_sales_shipping | f fk_online_sales_op | f fk_online_sales_cc | f fk_online_sales_customer | f fk_online_sales_product | f fk_online_sales_shipdate | f -418- . 'primary'. 'foreign'. NOT NULL. OID of the foreign table referenced in a foreign key constraint (zero if not a foreign key constraint). CONSTRAINT_SCHEMA_ID CONSTRAINT_KEY_COUNT FOREIGN_KEY_COUNT TABLE_ID FOREIGN_TABLE_ID INTEGER INTEGER INTEGER INTEGER INTEGER CONSTRAINT_TYPE INTEGER Example The following command returns constraint column names and types against the VMart schema. SELECT constraint_name. 'p'.SQL Reference Manual store_dimension_site02 store_dimension_site03 store_dimension_site04 online_page_dimension_site01 online_page_dimension_site02 online_page_dimension_site03 online_page_dimension_site04 call_center_dimension_site01 call_center_dimension_site02 call_center_dimension_site03 call_center_dimension_site04 (52 rows) | | | | | | | | | | | store_dimension store_dimension store_dimension online_page_dimension online_page_dimension online_page_dimension online_page_dimension call_center_dimension call_center_dimension call_center_dimension call_center_dimension | | | | | | | | | | | f f f f f f f f f f f | | | | | | | | | | | t t t t t t t t t t t TABLE_CONSTRAINTS Provides information about table constraints. if specified: UNIQUE. The name of the constraint. 'unique' and 'determines'. respectively. The number of constraint keys.

Column Name	Data Type	Description
TABLE_SCHEMA_ID	INTEGER	The schema ID from the catalog.
TABLE_SCHEMA	VARCHAR	The name of the schema.
TABLE_ID	INTEGER	The unique table OID assigned by the Vertica catalog.
TABLE_NAME	VARCHAR	The name of the table.
OWNER_ID	INTEGER	The owner ID from the catalog.
OWNER_NAME	VARCHAR	The name of the user who created the table.
IS_SYSTEM_TABLE	BOOLEAN	Is 'f' for user-created tables, 't' for Vertica system tables.
SYSTEM_TABLE_CREATOR	VARCHAR	The name of the process that creates the table, such as Designer.

Notes
The TABLE_SCHEMA and TABLE_NAME columns are case sensitive when executing queries that use the equality (=) predicate. Use the ILIKE predicate instead:

SELECT table_schema, table_name
FROM v_catalog.tables
WHERE table_schema ILIKE 'schema1';

Example
The following command returns information on all tables in the VMart schema:
SELECT table_schema, table_name, owner_name, is_system_table FROM tables;
 table_schema |      table_name       | owner_name | is_system_table
--------------+-----------------------+------------+-----------------
 public       | customer_dimension    | release    | f
 public       | product_dimension     | release    | f
 public       | promotion_dimension   | release    | f
 public       | date_dimension        | release    | f
 public       | vendor_dimension      | release    | f
 public       | employee_dimension    | release    | f
 public       | shipping_dimension    | release    | f
 public       | warehouse_dimension   | release    | f
 public       | inventory_fact        | release    | f
 store        | store_dimension       | release    | f
 store        | store_sales_fact      | release    | f
 store        | store_orders_fact     | release    | f
 online_sales | online_page_dimension | release    | f
 online_sales | call_center_dimension | release    | f
 online_sales | online_sales_fact     | release    | f
(15 rows)

TYPES
Provides information about supported data types.

Column Name	Data Type	Description
TYPE_ID	INTEGER	The ID assigned to a specific data type.
TYPE_NAME	VARCHAR	The data type name.

Example
SELECT * FROM types;
 type_id |  type_name
---------+-------------
       5 | Boolean
       6 | Integer
       7 | Float
       8 | Char
       9 | Varchar
      10 | Date
      11 | Time
      12 | Timestamp
      13 | TimestampTz

2) The data type unique ID assigned by the Vertica catalog The maximum allowable length for the data type The maximum allowable length for the column. for example. Example SELECT * FROM users. user_id | user_name | is_super_user -------------------+-----------+--------------45035996273704962 | dbadmin | t 45035996273767658 | ajax | f (2 rows) VIEW_COLUMNS Provides view attribute information. The name of the user. valid for character types The number of significant decimal digits -421- . Column Name USER_ID USER_NAME IS_SUPER_USER Data Type INTEGER VARCHAR VARCHAR Description The user ID assigned by Vertica. NUMERIC(10. Indicates whether the current user is superuser.SQL System Tables (Monitoring APIs) 14 | 15 | 16 | 17 | 117 | (14 rows) Interval TimeTz Numeric Varbinary Binary USERS Provides information about all users in the database. where t is true and f is false. Column Name TABLE_ID TABLE_SCHEMA TABLE_NAME COLUMN_NAME DATA_TYPE DATA_TYPE_DESCRIPTION DATA_TYPE_ID DATA_TYPE_LENGTH CHARACTER_MAXIMUM_LENG TH NUMERIC_PRECISION Data Type Description The unique table OID assigned by the Vertica catalog of the view VARCHAR VARCHAR VARCHAR VARCHAR VARCHAR INTEGER INTEGER INTEGER INTEGER The name of the schema The name of the table The name of the column being returned The data type of the column being returned A description of the data type.

See Using Views for more information. where t is true and f is false. The name of the view owner. -422- . The owner ID assigned by Vertica.SQL Reference Manual NUMERIC_SCALE DATETIME_PRECISION INTERVAL_PRECISION ORDINAL_POSITION INTEGER INTEGER INTEGER VARCHAR The number of fractional digits The number of fractional digits retained in the seconds field The number of fractional digits retained in the seconds field The position of the column Example SELECT * FROM view_columns. The query used to define the view. -[ RECORD 5 ]------------+------------------------table_id | 45035996273833232 table_schema | public table_name | myview column_name | category_description data_type | Char data_type_description | Char(32) data_type_id | 8 data_type_length | 32 character_maximum_length | 32 numeric_precision | numeric_scale | datetime_precision | interval_precision | ordinal_position | 5 VIEWS Provides information about all views within the system. The following is a small portion of the entire result set for illustration purposes. The name of the schema that contains the view. The table ID assigned by Vertica. The table name. NULL fields indicate that those columns were not defined. Indicates whether the table is a system view. Column Name TABLE_SCHEMA_ID TABLE_SCHEMA TABLE_ID TABLE_NAME OWNER_ID OWNER_NAME VIEW_DEFINITION IS_SYSTEM_VIEW Data Type INTEGER VARCHA R INTEGER VARCHA R INTEGER VARCHA R VARCHA R VARCHA R Description The schema ID assigned by Vertica.

SYSTEM_VIEW_CREATOR  VARCHAR  The user name who created the view.

Example
Call the VIEWS table:
SELECT * FROM views;

-[ RECORD 1 ]-------+-------------------------------------------------------------
table_schema_id     | 45035996273704963
table_schema        | public
table_id            | 45035996273823130
table_name          | temp
owner_id            | 45035996273704961
owner_name          | release
view_definition     | SELECT to_date('F'::character varying, 'dd mm yyyy'::character varying) AS to_date FROM public.customer_dimension
is_system_view      | f
system_view_creator |

SQL System Tables (Monitoring APIs)

SYSTEM_TABLES

Displays a list of all system table names.

Column Name        Data Type  Description
TABLE_NAME         VARCHAR    The name of the table.
TABLE_DESCRIPTION  VARCHAR    A description of the system table's purpose.

Example
Call all the system tables:
SELECT * FROM system_tables;

 table_schema |        table_name         |                         table_description
--------------+---------------------------+----------------------------------------------------------------------------
 v_catalog    | columns                   | Table column information
 v_catalog    | foreign_keys              | Foreign key information
 v_catalog    | grants                    | Grant information
 v_catalog    | primary_keys              | Primary key information
 v_catalog    | projections               | Projection information
 v_catalog    | system_tables             | Displays a list of all system tables except internal ones
 v_catalog    | table_constraints         | Constraint information
 v_catalog    | tables                    | Table information
 v_catalog    | types                     | Information about supported data types
 v_catalog    | users                     | User information
 v_catalog    | view_columns              | View attribute information
 v_catalog    | views                     | View information
 v_monitor    | active_events             | Displays all of the active events in the cluster
 v_monitor    | column_storage            | Information on amount of disk storage used by each column/projection/node
 v_monitor    | current_session           | Information on current Session
 v_monitor    | disk_resource_rejections  | Disk resource rejection summarizations
 v_monitor    | disk_storage              | Disk usage information
 v_monitor    | event_configurations      | Current event configuration
 v_monitor    | execution_engine_profiles | Per EE operator profiling information
 v_monitor    | host_resources            | Per host profiling information
 v_monitor    | load_streams              | Load metrics for each load stream on each node
 v_monitor    | local_nodes               | Local node information
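The system-table listing can itself be filtered like any other table. As a minimal sketch, assuming the schema and column names shown in the listing, this query restricts the output to the monitoring tables in the v_monitor schema:

```sql
SELECT table_name, table_description
FROM system_tables
WHERE table_schema = 'v_monitor'
ORDER BY table_name;
```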

 v_monitor    | locks                     | Lock grants and requests for all nodes
 v_monitor    | node_resources            | Per node profiling information
 v_monitor    | partitions                | Partition metadata
 v_monitor    | projection_refreshes      | Refresh information on each projection
 v_monitor    | projection_storage        | Storage information on each projection
 v_monitor    | query_metrics             | Summarized query information
 v_monitor    | query_profiles            | Query profiling
 v_monitor    | resource_rejections       | Resource rejection summarizations
 v_monitor    | resource_usage            | Resource usage information
 v_monitor    | session_profiles          | Per session profiling information
 v_monitor    | sessions                  | Information on each session
 v_monitor    | storage_containers        | Information on each storage container
 v_monitor    | system                    | System level information
 v_monitor    | tuple_mover_operations    | Information about (automatic) Tuple Mover
 v_monitor    | wos_container_storage     | Storage information on WOS allocator
(38 rows)

V_MONITOR Schema

The system tables in this section reside in the v_monitor schema.

ACTIVE_EVENTS

Displays all active events in the cluster. See Monitoring Events.

Column Name             Data Type  Description
CURRENT_TIMESTAMP       VARCHAR    The Linux system time of query execution in a format that can be used as a Date/Time Expression (page 76).
NODE_NAME               VARCHAR    The name of the node that is reporting the requested information.
EVENT_CODE              INTEGER    A numeric ID that indicates the type of event. See Event Types for a list of event type codes.
EVENT_ID                INTEGER    A unique numeric ID that identifies the specific event.
EVENT_SEVERITY          VARCHAR    The severity of the event from highest to lowest. These events are based on standard syslog severity types: 0—Emergency, 1—Alert, 2—Critical, 3—Error, 4—Warning, 5—Notice, 6—Informational, 7—Debug.
EVENT_POSTED_TIMESTAMP  VARCHAR    The year, month, day, and time the event was reported. The time is posted in military time.
EVENT_EXPIRATION        VARCHAR    The year, month, day, and time the event expires. The time is posted in military time.

Column Name                Data Type  Description
EVENT_CODE_DESCRIPTION     VARCHAR    A generic description of the event.
EVENT_PROBLEM_DESCRIPTION  VARCHAR    A brief description of the event and details pertinent to the specific situation.
REPORTING_NODE             VARCHAR    The name of the node within the cluster that reported the event.
EVENT_SENT_TO_CHANNELS     VARCHAR    The event logging mechanisms that are configured for Vertica. These can include vertica.log (configured by default), syslog, and SNMP.
EVENT_POSTED_COUNT         INTEGER    Tracks the number of times an event occurs. Rather than posting the same event multiple times, Vertica posts the event once and then counts the number of additional instances in which the event occurs. If the cause of the event is still active, the event is posted again.

Example
Call the ACTIVE_EVENTS table:
SELECT * FROM active_events;

-[ RECORD 1 ]-------------+-----------------------------------------
current_timestamp         | 2009-08-11 14:38:18.083285
node_name                 | site01
event_code                | 6
event_id                  | 6
event_severity            | Informational
is_event_posted           | 2009-08-11 09:38:39.008458
event_expiration          | 2077-08-29 11:52:46.008458
event_code_description    | Node State Change
event_problem_description | Changing node site01 startup state to UP
reporting_node            | site01
event_sent_to_channels    | Vertica Log
event_posted_count        | 1
-[ RECORD 2 ]-------------+-----------------------------------------
current_timestamp         | 2009-08-11 14:38:34.226377
node_name                 | site02
event_code                | 6
event_id                  | 6
event_severity            | Informational
is_event_posted           | 2009-08-11 09:38:39.018172
event_expiration          | 2077-08-29 11:52:46.018172
event_code_description    | Node State Change
event_problem_description | Changing node site02 startup state to UP
reporting_node            | site02
event_sent_to_channels    | Vertica Log
event_posted_count        | 1
-[ RECORD 3 ]-------------+-----------------------------------------
current_timestamp         | 2009-08-11 14:38:48.859987
node_name                 | site03
event_code                | 6
event_id                  | 6
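Because ACTIVE_EVENTS exposes the syslog-style severity names listed above, a monitoring script can restrict its attention to the serious ones. A minimal sketch, assuming the column names documented above:

```sql
SELECT node_name, event_code_description, event_problem_description
FROM active_events
WHERE event_severity IN ('Emergency', 'Alert', 'Critical', 'Error');
```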

event_severity            | Informational
is_event_posted           | 2009-08-11 09:38:39.027258
event_expiration          | 2077-08-29 11:52:46.027258
event_code_description    | Node State Change
event_problem_description | Changing node site03 startup state to UP
reporting_node            | site03
event_sent_to_channels    | Vertica Log
event_posted_count        | 1
-[ RECORD 4 ]-------------+-----------------------------------------
current_timestamp         | 2009-08-11 14:39:04.226379
node_name                 | site04
event_code                | 6
event_id                  | 6
event_severity            | Informational
is_event_posted           | 2009-08-11 09:38:39.008288
event_expiration          | 2077-08-29 11:52:46.008288
event_code_description    | Node State Change
event_problem_description | Changing node site04 startup state to UP
reporting_node            | site04
event_sent_to_channels    | Vertica Log
event_posted_count        | 1

COLUMN_STORAGE

Returns the amount of disk storage used by each column of each projection on each node.

Column Name        Data Type  Description
CURRENT_TIMESTAMP  VARCHAR    The Linux system time of query execution in a format that can be used as a Date/Time Expression (page 76).
NODE_NAME          VARCHAR    The name of the node that is reporting the requested information.
COLUMN_NAME        VARCHAR    A projection column name.
ROW_COUNT          INTEGER    The number of rows in the column (cardinality).
USED_BYTES         INTEGER    The disk storage allocation of the column in bytes.
WOS_ROW_COUNT      INTEGER    The number of WOS rows in the column.
ROS_ROW_COUNT      INTEGER    The number of ROS rows in the column.
ROS_USED_BYTES     INTEGER    The number of ROS bytes in the column.
ROS_COUNT          INTEGER    The number of ROS containers.
PROJECTION_NAME    VARCHAR    The associated projection name.
PROJECTION_SCHEMA  VARCHAR    The name of the schema associated with the projection.

Column Name          Data Type  Description
ANCHOR_TABLE_NAME    VARCHAR    The associated table name.
ANCHOR_TABLE_SCHEMA  VARCHAR    The associated table's schema name.

Notes
WOS data is stored by row, so per-column byte counts are not available.

Example
Call the COLUMN_STORAGE table:
SELECT * FROM COLUMN_STORAGE;

-[ RECORD 1 ]-------+--------------------------------------
current_timestamp   | 2009-08-11 14:40:30.549209
node_name           | site01
column_name         | call_center_key
row_count           | 200
used_bytes          | 277
wos_row_count       | 0
ros_row_count       | 200
ros_used_bytes      | 277
ros_count           | 1
projection_name     | call_center_dimension_site01
projection_schema   | online_sales
anchor_table_name   | call_center_dimension
anchor_table_schema | online_sales
-[ RECORD 2 ]-------+--------------------------------------
current_timestamp   | 2009-08-11 14:40:58.12043
node_name           | site01
column_name         | cc_address
row_count           | 200
used_bytes          | 1402
wos_row_count       | 0
ros_row_count       | 200
ros_used_bytes      | 1402
ros_count           | 1
projection_name     | call_center_dimension_site01
projection_schema   | online_sales
anchor_table_name   | call_center_dimension
anchor_table_schema | online_sales
-[ RECORD 3 ]-------+--------------------------------------
current_timestamp   | 2009-08-11 14:41:11.298898
node_name           | site01
column_name         | cc_city
row_count           | 200
used_bytes          | 1196
wos_row_count       | 0
ros_row_count       | 200
ros_used_bytes      | 1196
ros_count           | 1
projection_name     | call_center_dimension_site01
projection_schema   | online_sales
anchor_table_name   | call_center_dimension
anchor_table_schema | online_sales
-[ RECORD 4 ]-------+--------------------------------------
...

Call specific columns from the COLUMN_STORAGE table:
SELECT column_name, row_count, projection_name, anchor_table_name
FROM COLUMN_STORAGE
WHERE node_name = 'site02' AND row_count = 1000;

     column_name      | row_count |       projection_name        |   anchor_table_name

----------------------+-----------+------------------------------+-----------------------
 end_date             |      1000 | online_page_dimension_site02 | online_page_dimension
 epoch                |      1000 | online_page_dimension_site02 | online_page_dimension
 online_page_key      |      1000 | online_page_dimension_site02 | online_page_dimension
 page_description     |      1000 | online_page_dimension_site02 | online_page_dimension
 page_number          |      1000 | online_page_dimension_site02 | online_page_dimension
 page_type            |      1000 | online_page_dimension_site02 | online_page_dimension
 start_date           |      1000 | online_page_dimension_site02 | online_page_dimension
 ad_media_name        |      1000 | promotion_dimension_site02   | promotion_dimension
 ad_type              |      1000 | promotion_dimension_site02   | promotion_dimension
 coupon_type          |      1000 | promotion_dimension_site02   | promotion_dimension
 display_provider     |      1000 | promotion_dimension_site02   | promotion_dimension
 display_type         |      1000 | promotion_dimension_site02   | promotion_dimension
 epoch                |      1000 | promotion_dimension_site02   | promotion_dimension
 price_reduction_type |      1000 | promotion_dimension_site02   | promotion_dimension
 promotion_begin_date |      1000 | promotion_dimension_site02   | promotion_dimension
 promotion_cost       |      1000 | promotion_dimension_site02   | promotion_dimension
 promotion_end_date   |      1000 | promotion_dimension_site02   | promotion_dimension
 promotion_key        |      1000 | promotion_dimension_site02   | promotion_dimension
 promotion_media_type |      1000 | promotion_dimension_site02   | promotion_dimension
 promotion_name       |      1000 | promotion_dimension_site02   | promotion_dimension
(20 rows)

CURRENT_SESSION

Returns information about the current active session. You can use this table to find out the current session's session ID and get the duration of the previously run query.

Column Name              Data Type  Description
CURRENT_TIMESTAMP        VARCHAR    The Linux system time of query execution in a format that can be used as a Date/Time Expression (page 76).
NODE_NAME                VARCHAR    The name of the node that is reporting the requested information.
USER_NAME                VARCHAR    The name used to log into the database, or NULL if the session is internal.
CLIENT_HOSTNAME          VARCHAR    The host name and port of the TCP socket from which the client connection was made; NULL if the session is internal.
LOGIN_TIMESTAMP          TIMESTAMP  The date and time the user logged into the database or when the internal session was created. This column can be useful for identifying sessions that have been left open and could be idle.
SESSION_ID               VARCHAR    The identifier required to close or interrupt a session. This identifier is unique within the cluster at any point in time but can be reused when the session closes.
TRANSACTION_START        TIMESTAMP  The date/time the current transaction started, or NULL if no transaction is running.
TRANSACTION_ID           VARCHAR    A string containing the hexadecimal representation of the transaction ID, if any.
TRANSACTION_DESCRIPTION  VARCHAR    A description of the current transaction, if any.

Column Name                               Data Type  Description
STATEMENT_START                           TIMESTAMP  The date/time the current statement started execution, or NULL if no statement is running.
STATEMENT_ID                              VARCHAR    An ID for the currently executing statement. NULL indicates that no statement is currently being processed.
LAST_STATEMENT_DURATION_US                INTEGER    The duration of the last completed statement in microseconds.
CURRENT_STATEMENT                         VARCHAR    The currently executing statement, if any; NULL otherwise.
LAST_STATEMENT                            VARCHAR    NULL if the user has just logged in; otherwise the currently running statement or the most recently completed statement.
EXECUTION_ENGINE_PROFILING_CONFIGURATION  VARCHAR    Returns a value that indicates whether profiling is turned on. Results are: empty when no profiling; 'Local' when profiling is on for this session; 'Global' when on by default for all sessions; 'Local, Global' when on by default for all sessions and on for the current session.
QUERY_PROFILING_CONFIGURATION             VARCHAR    Returns a value that indicates whether profiling is turned on: empty when no profiling; 'Local' when on for this session; 'Global' when on by default for all sessions; 'Local, Global' when on by default for all sessions and on for the current session.
SESSION_PROFILING_CONFIGURATION           VARCHAR    Returns a value that indicates whether profiling is turned on: empty when no profiling; 'Local' when on for this session; 'Global' when on by default for all sessions; 'Local, Global' when on by default for all sessions and on for the current session.

Notes
• The default for profiling is ON ('1') for all sessions. Each session can turn profiling ON or OFF.
• Profiling parameters (such as GlobalEEProfiling in the examples below) are set in the Vertica configuration file (vertica.conf). To turn profiling on, set the parameter to '1'. To turn profiling off, set the parameter to '0'.

Examples
Call the CURRENT_SESSION table:
SELECT * FROM current_session;

-[ RECORD 1 ]----------------------------+---------------------------------------------
current_timestamp                        | 2009-08-11 14:44:25.719724
node_name                                | site01
user_name                                | release
client_hostname                          | 127.0.0.1:36674
login_timestamp                          | 2009-08-11 13:37:59.486908
session_id                               | fc10-1-16482:0x7586

transaction_start                        | 2009-08-11 14:23:15.709802
transaction_id                           | 0xa000000001a440
transaction_description                  | user release (select * from node_resources;)
statement_start                          | 2009-08-11 14:44:25.014816
statement_id                             | 42949673011
last_statement_duration_sec              | 43865
current_statement                        | select * from current_session;
last_statement                           | select * from column_storage;
execution_engine_profiling_configuration |
query_profiling_configuration            |
session_profiling_configuration          |

Request specific columns from the table:
SELECT node_name, session_id, execution_engine_profiling_configuration FROM CURRENT_SESSION;

 node_name |     session_id      | execution_engine_profiling_configuration
-----------+---------------------+------------------------------------------
 site01    | fc10-1-16482:0x2523 | Global
(1 row)

The sequence of commands in this example shows the use of disabling and enabling profiling for local and global sessions. This command disables EE profiling for query execution runs:
SELECT disable_profiling('EE');

   disable_profiling
-----------------------
 EE Profiling Disabled
(1 row)

The following command sets the GlobalEEProfiling configuration parameter to 0, which turns off profiling:
SELECT set_config_parameter('GlobalEEProfiling', '0');

    set_config_parameter
----------------------------
 Parameter set successfully
(1 row)

The following command tells you whether profiling is set to 'Local' or 'Global' or none:
SELECT execution_engine_profiling_configuration FROM CURRENT_SESSION;

 ee_profiling_config
---------------------

(1 row)

Note: The result set is empty because profiling was turned off in the preceding example.

This command now enables EE profiling for query execution runs:
SELECT enable_profiling('EE');

   enable_profiling
----------------------
 EE Profiling Enabled
(1 row)
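As noted earlier, CURRENT_SESSION can be used to find the current session's ID and the duration of the previously run query. A minimal sketch, using the columns from the table definition above (the column table names the duration column LAST_STATEMENT_DURATION_US, while the example output renders it as last_statement_duration_sec; use whichever your build reports):

```sql
SELECT session_id, last_statement, last_statement_duration_us
FROM current_session;
```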

Now when you run a select on the CURRENT_SESSION table, you can see profiling is ON for the local session:
SELECT execution_engine_profiling_configuration FROM CURRENT_SESSION;

 ee_profiling_config
---------------------
 Local
(1 row)

Now turn profiling on for all sessions by setting the GlobalEEProfiling configuration parameter to 1:
SELECT set_config_parameter('GlobalEEProfiling', '1');

    set_config_parameter
----------------------------
 Parameter set successfully
(1 row)

Now when you run a select on the CURRENT_SESSION table, you can see profiling is ON for the local session, as well as for all sessions:
SELECT execution_engine_profiling_configuration FROM CURRENT_SESSION;

 ee_profiling_config
---------------------
 Local, Global
(1 row)

See Also
EXECUTION_ENGINE_PROFILES (page 436), QUERY_PROFILES (page 450), and SESSION_PROFILES (page 457)

DISK_RESOURCE_REJECTIONS

Returns requests for resources that are rejected due to disk space shortages.

Column Name                Data Type  Description
CURRENT_TIMESTAMP          VARCHAR    The Linux system time of query execution in a format that can be used as a Date/Time Expression (page 76).
NODE_NAME                  VARCHAR    The name of the node that is reporting the requested information.
ACCUMULATION_START         VARCHAR    The time of first request rejection for this requester.
REQUEST_TYPE               VARCHAR    The resource request requester (example: plan type).
DISK_SPACE_REJECTED_COUNT  INTEGER    The total number of disk space resource requests rejected.

Column Name                   Data Type  Description
FAILED_VOLUME_REJECTED_COUNT  INTEGER    The total number of disk space resource requests on a failed volume.

Example
The result of no rows in the following example means that there were no disk rejections:
SELECT node_name, request_type, disk_space_rejected_count, failed_volume_rejected_count
FROM DISK_RESOURCE_REJECTIONS;

 node_name | request_type | disk_space_rejected_count | failed_volume_rejected_count
-----------+--------------+---------------------------+------------------------------
(0 rows)

DISK_STORAGE

Returns the amount of disk storage used by the database on each node.

Column Name        Data Type  Description
CURRENT_TIMESTAMP  VARCHAR    The Linux system time of query execution in a format that can be used as a Date/Time Expression (page 76).
NODE_NAME          VARCHAR    The name of the node that is reporting the requested information.
STORAGE_PATH       VARCHAR    The path where the storage location is mounted.
STORAGE_USAGE      VARCHAR    The type of information stored in the location: DATA: Only data is stored in the location. TEMP: Only temporary files that are created during loads or queries are stored in the location. DATA,TEMP: Both types of files are stored in the location.
RANK               INTEGER    The rank assigned to the storage location based on its performance. Ranks are used to create a tiered disk architecture in which projections, columns, and partitions are stored on different disks based on predicted or measured access patterns. See Creating and Configuring Storage Locations in the Administrator's Guide.
THROUGHPUT         INTEGER    The measure of a storage location's performance in MB/sec. 1/throughput is the time taken to read 1MB of data.
LATENCY            INTEGER    The measure of a storage location's performance in seeks/sec. 1/latency is the time taken to seek to the data.
STORAGE_STATUS     VARCHAR    The status of the storage location: active or retired.

Column Name              Data Type  Description
DISK_BLOCK_SIZE_BYTES    INTEGER    The block size of the disk in bytes.
DISK_SPACE_USED_BLOCKS   INTEGER    The number of disk blocks in use.
DISK_SPACE_USED_MB       INTEGER    The number of megabytes of disk storage in use.
DISK_SPACE_FREE_BLOCKS   INTEGER    The number of free disk blocks available.
DISK_SPACE_FREE_MB       INTEGER    The number of megabytes of free storage available.
DISK_SPACE_FREE_PERCENT  INTEGER    The percentage of free disk space remaining.

Notes
• A storage location's performance is measured in throughput in MB/sec and latency in seeks/sec. These two values are converted to a single number (Speed) with the following formula:
  ReadTime (time to read 1MB) = 1/throughput + 1/latency
  § 1/throughput is the time taken to read 1MB of data.
  § 1/latency is the time taken to seek to the data.
  A disk is faster than another disk if its ReadTime is less.
• This information is useful in letting you know where the data files reside. There can be multiple storage locations per node, and these locations can be on different disks with different free/used space, block size, etc.

Example
Call the DISK_STORAGE table:
SELECT * FROM DISK_STORAGE;

-[ RECORD 1 ]-----------+---------------------------------------------
current_timestamp       | 2009-08-11 14:48:35.932541
node_name               | site01
storage_path            | /mydb/site01_catalog/Catalog
storage_usage           | CATALOG
rank                    | 0
throughput              | 0
latency                 | 0
storage_status          | Active
disk_block_size_bytes   | 4096
disk_space_used_blocks  | 34708721
disk_space_used_mb      | 135581
disk_space_free_blocks  | 178816678
disk_space_free_mb      | 698502
disk_space_free_percent | 83%
-[ RECORD 2 ]-----------+---------------------------------------------
current_timestamp       | 2009-08-11 14:48:53.884255
node_name               | site01
storage_path            | /mydb/site01_data
storage_usage           | DATA,TEMP
rank                    | 0
throughput              | 0
latency                 | 0
storage_status          | Active
disk_block_size_bytes   | 4096
disk_space_used_blocks  | 34708721
disk_space_used_mb      | 135581
disk_space_free_blocks  | 178816678
disk_space_free_mb      | 698502
disk_space_free_percent | 83%
-[ RECORD 3 ]-----------+---------------------------------------------
current_timestamp       | 2009-08-11 14:49:08.299012
node_name               | site02
storage_path            | /mydb/site02_catalog/Catalog
storage_usage           | CATALOG
rank                    | 0
throughput              | 0
latency                 | 0
storage_status          | Active
disk_block_size_bytes   | 4096
disk_space_used_blocks  | 19968349
disk_space_used_mb      | 78001
disk_space_free_blocks  | 193557050
disk_space_free_mb      | 756082
disk_space_free_percent | 90%
-[ RECORD 4 ]-----------+---------------------------------------------
current_timestamp       | 2009-08-11 14:49:22.696772
node_name               | site02
storage_path            | /mydb/site02_data
storage_usage           | DATA,TEMP
rank                    | 0
throughput              | 0
latency                 | 0
storage_status          | Active
disk_block_size_bytes   | 4096
disk_space_used_blocks  | 19968349
disk_space_used_mb      | 78001
disk_space_free_blocks  | 193557050
disk_space_free_mb      | 756082
disk_space_free_percent | 90%
-[ RECORD 5 ]-----------+---------------------------------------------
current_timestamp       | 2009-08-11 14:50:03.960157
node_name               | site03
storage_path            | /mydb/site03_catalog/Catalog
storage_usage           | CATALOG
rank                    | 0
throughput              | 0
latency                 | 0
storage_status          | Active
disk_block_size_bytes   | 4096
disk_space_used_blocks  | 19902595
disk_space_used_mb      | 77744
disk_space_free_blocks  | 193622804
disk_space_free_mb      | 756339
disk_space_free_percent | 90%
-[ RECORD 6 ]-----------+---------------------------------------------
current_timestamp       | 2009-08-11 14:50:27.415735
node_name               | site03
storage_path            | /mydb/site03_data
storage_usage           | DATA,TEMP
rank                    | 0
throughput              | 0
latency                 | 0
storage_status          | Active
disk_block_size_bytes   | 4096
disk_space_used_blocks  | 19902595
disk_space_used_mb      | 77744
disk_space_free_blocks  | 193622804
disk_space_free_mb      | 756339
disk_space_free_percent | 90%
-[ RECORD 7 ]-----------+---------------------------------------------
current_timestamp       | 2009-08-11 14:50:39.398879
node_name               | site04
storage_path            | /mydb/site04_catalog/Catalog
storage_usage           | CATALOG
rank                    | 0
throughput              | 0
latency                 | 0
storage_status          | Active
disk_block_size_bytes   | 4096
disk_space_used_blocks  | 19972309
disk_space_used_mb      | 78017
disk_space_free_blocks  | 193553090
disk_space_free_mb      | 756066
disk_space_free_percent | 90%
-[ RECORD 8 ]-----------+---------------------------------------------
current_timestamp       | 2009-08-11 14:50:57.879302
node_name               | site04
storage_path            | /mydb/site04_data
storage_usage           | DATA,TEMP
rank                    | 0
throughput              | 0
latency                 | 0
storage_status          | Active
disk_block_size_bytes   | 4096
disk_space_used_blocks  | 19972309
disk_space_used_mb      | 78017
disk_space_free_blocks  | 193553090
disk_space_free_mb      | 756066
disk_space_free_percent | 90%

Request only specific columns from the table:
SELECT node_name, storage_path, storage_status, disk_space_free_percent FROM disk_storage;

 node_name |         storage_path         | storage_status | disk_space_free_percent
-----------+------------------------------+----------------+-------------------------
 site01    | /mydb/site01_catalog/Catalog | Active         | 83%
 site01    | /mydb/site01_data            | Active         | 83%
 site02    | /mydb/site02_catalog/Catalog | Active         | 90%
 site02    | /mydb/site02_data            | Active         | 90%
 site03    | /mydb/site03_catalog/Catalog | Active         | 90%
 site03    | /mydb/site03_data            | Active         | 90%
 site04    | /mydb/site04_catalog/Catalog | Active         | 90%
 site04    | /mydb/site04_data            | Active         | 90%
(8 rows)

EVENT_CONFIGURATIONS

Monitors the configuration of events.

Column Name              Data Type  Description
EVENT_ID                 VARCHAR    The name of the event.
EVENT_DELIVERY_CHANNELS  VARCHAR    The delivery channel on which the event occurred.

Example
SELECT * FROM event_configurations;

                 event_id                  | event_delivery_channels
-------------------------------------------+-------------------------
 Low Disk Space                            | Vertica Log, SNMP Trap
 Read Only File System                     | Vertica Log, SNMP Trap
 Loss Of K Safety                          | Vertica Log, SNMP Trap
 Current Fault Tolerance at Critical Level | Vertica Log, SNMP Trap
 Too Many ROS Containers                   | Vertica Log, SNMP Trap
 WOS Over Flow                             | Vertica Log, SNMP Trap
 Node State Change                         | Vertica Log, SNMP Trap
 Recovery Failure                          | Vertica Log, SNMP Trap
 Recovery Error                            | Vertica Log
 Recovery Lock Error                       | Vertica Log
 Recovery Projection Retrieval Error       | Vertica Log
 Refresh Error                             | Vertica Log
 Refresh Lock Error                        | Vertica Log
 Tuple Mover Error                         | Vertica Log
 Timer Service Task Error                  | Vertica Log
(15 rows)

EXECUTION_ENGINE_PROFILES

Provides information regarding query execution runs. To obtain information about query execution runs for your database, see Profiling Database Performance.

Column Name        Data Type  Description
CURRENT_TIMESTAMP  VARCHAR    The Linux system time of query execution in a format that can be used as a Date/Time Expression (page 76).
NODE_NAME          VARCHAR    The name of the node that is reporting the requested information.
SESSION_ID         VARCHAR    The identification of the session for which profiling information is captured. This identifier is unique within the cluster at any point in time but can be reused when the session closes.
USER_ID            INTEGER    The ID of the user who started the session.
USER_NAME          VARCHAR    The name of the user who started the session.
TRANSACTION_ID     INTEGER    An identifier for the transaction within the session, if any; otherwise NULL.
STATEMENT_ID       INTEGER    An ID for the currently executing statement. NULL indicates that no statement is currently being processed.
OPERATOR_NAME      VARCHAR
OPERATOR_ID        INTEGER
COUNTER_NAME       VARCHAR    The name of the counter. See COUNTER_NAME Values below.
COUNTER_VALUE      INTEGER    The value of the counter.

COUNTER_NAME Values
The value of COUNTER_NAME can be any of the following:

COUNTER_NAME         Description
bytes sent           The number of bytes sent over the network for the query execution.
bytes received       The number of bytes received over the network for the query execution.
rows produced        The number of rows produced by the EE operator.
execution time (ms)  The time required to execute the query (in milliseconds).

Example
SELECT operator_name, operator_id, counter_name, counter_value
FROM EXECUTION_ENGINE_PROFILES
WHERE operator_name = 'CopyNode'
ORDER BY counter_value DESC;

 operator_name | operator_id |      counter_name      | counter_value
---------------+-------------+------------------------+---------------
 CopyNode      |           3 | rows produced          |           219
 CopyNode      |           1 | rows produced          |            95
 CopyNode      |           1 | rows produced          |            53
 CopyNode      |           1 | rows produced          |            34
 CopyNode      |           1 | rows produced          |            20
 CopyNode      |           2 | rows produced          |            15
 CopyNode      |           1 | execution time (us)    |             1
 CopyNode      |           1 | execution time (us)    |             1
 CopyNode      |           3 | execution time (us)    |             1
 CopyNode      |           1 | execution time (us)    |             0
 CopyNode      |           1 | output queue wait (us) |             0
 CopyNode      |           3 | output queue wait (us) |             0
 CopyNode      |           1 | output queue wait (us) |             0
 CopyNode      |           1 | execution time (us)    |             0
 CopyNode      |           1 | output queue wait (us) |             0
 CopyNode      |           1 | output queue wait (us) |             0
 CopyNode      |           2 | execution time (us)    |             0
 CopyNode      |           2 | output queue wait (us) |             0
(18 rows)

HOST_RESOURCES

Provides a snapshot of the node. This is useful for regularly polling the node with automated tools or scripts.
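The per-operator counters in EXECUTION_ENGINE_PROFILES can also be aggregated rather than read row by row. A minimal sketch, assuming the counter names documented for this table:

```sql
SELECT operator_name, SUM(counter_value) AS total_rows_produced
FROM execution_engine_profiles
WHERE counter_name = 'rows produced'
GROUP BY operator_name
ORDER BY total_rows_produced DESC;
```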

Column Name                     Data Type  Description
CURRENT_TIMESTAMP               VARCHAR    The Linux system time of query execution in a format that can be used as a Date/Time Expression (page 76).
HOST_NAME                       VARCHAR    The name of the node that is reporting the requested information.
OPEN_FILES_LIMIT                INTEGER    The maximum number of files that can be open at one time on the node.
THREADS_LIMIT                   INTEGER    The maximum number of threads that can coexist on the node.
CORE_FILE_LIMIT_MAX_SIZE_BYTES  INTEGER    The maximum core file size allowed on the node.
PROCESSOR_COUNT                 INTEGER    The number of system processors.
PROCESSOR_CORE_COUNT            INTEGER    The number of processor cores in the system.
PROCESSOR_DESCRIPTION           VARCHAR    A description of the processor. For example: Intel(R) Core(TM)2 Duo CPU T8100 @2.10GHz
OPENED_FILE_COUNT               INTEGER    The total number of open files on the node.
OPENED_SOCKET_COUNT             INTEGER    The total number of open sockets on the node.
OPENED_NONFILE_NONSOCKET_COUNT  INTEGER    The total number of other file descriptions open, in which "other" could be a directory or FIFO. It is not an open file or socket.
TOTAL_MEMORY_BYTES              INTEGER    The total amount of physical RAM, in bytes, available on the system.
TOTAL_MEMORY_FREE_BYTES         INTEGER    The amount of physical RAM, in bytes, left unused by the system.
TOTAL_BUFFER_MEMORY_BYTES       INTEGER    The amount of physical RAM, in bytes, used for file buffers on the system.
TOTAL_MEMORY_CACHE_BYTES        INTEGER    The amount of physical RAM, in bytes, used as cache memory on the system.
TOTAL_SWAP_MEMORY_BYTES         INTEGER    The total amount of swap memory available, in bytes.
TOTAL_SWAP_MEMORY_FREE_BYTES    INTEGER    The total amount of swap memory free, in bytes.
DISK_SPACE_FREE_MB              INTEGER    The free disk space available, in megabytes, for all storage location file systems (data directories).
DISK_SPACE_USED_MB              INTEGER    The disk space used, in megabytes, for all storage location file systems.
DISK_SPACE_TOTAL_MB             INTEGER    The total disk space, in megabytes, for all storage location file systems.
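Rather than selecting every column of HOST_RESOURCES, a polling script usually needs only a few of the fields above. A hedged sketch deriving free-memory and free-disk percentages from the documented columns (integer division; adjust the arithmetic if fractional precision is needed):

```sql
SELECT host_name,
       total_memory_free_bytes * 100 / total_memory_bytes AS memory_free_percent,
       disk_space_free_mb * 100 / disk_space_total_mb     AS disk_free_percent
FROM host_resources;
```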

Examples
Call the HOST_RESOURCES table:
SELECT * FROM HOST_RESOURCES;

-[ RECORD 1 ]------------------+-----------------------------
current_timestamp              | 2009-09-01 18:20:06.952211
host_name                      | fc10-1.vertica.com
open_files_limit               | 65536
threads_limit                  | 139264
core_file_limit_max_size_bytes | 4096872448
processor_count                | 1
processor_core_count           | 4
processor_description          | Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz
opened_file_count              | 5
opened_socket_count            | 5
opened_nonfile_nonsocket_count | 3
total_memory_bytes             | 8393220096
total_memory_free_bytes        | 4652384256
total_buffer_memory_bytes      | 664379392
total_memory_cache_bytes       | 1128968192
total_swap_memory_bytes        | 4293586944
total_swap_memory_free_bytes   | 4280012800
disk_space_free_mb             | 664355
disk_space_used_mb             | 169728
disk_space_total_mb            | 834083
-[ RECORD 2 ]------------------+-----------------------------
current_timestamp              | 2009-09-01 18:20:06.951431
host_name                      | fc10-2.vertica.com
open_files_limit               | 65536
threads_limit                  | 139264
core_file_limit_max_size_bytes | 4096872448
processor_count                | 1
processor_core_count           | 4
processor_description          | Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz
opened_file_count              | 5
opened_socket_count            | 4
opened_nonfile_nonsocket_count | 3
total_memory_bytes             | 8393220096
total_memory_free_bytes        | 4610568192
total_buffer_memory_bytes      | 2133594112
total_memory_cache_bytes       | 870674432
total_swap_memory_bytes        | 4293586944
total_swap_memory_free_bytes   | 4288507904
disk_space_free_mb             | 756833
disk_space_used_mb             | 77250
disk_space_total_mb            | 834083
-[ RECORD 3 ]------------------+-----------------------------
current_timestamp              | 2009-09-01 18:20:06.961542
host_name                      | fc10-3.vertica.com
open_files_limit               | 65536
threads_limit                  | 139264
core_file_limit_max_size_bytes | 4096872448
processor_count                | 1
processor_core_count           | 4
processor_description          | Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz
opened_file_count              | 5
opened_socket_count            | 4
opened_nonfile_nonsocket_count | 3
total_memory_bytes             | 8393220096
total_memory_free_bytes        | 4144705536
total_buffer_memory_bytes      | 2606940160
total_memory_cache_bytes       | 818905088

total_swap_memory_bytes        | 4293586944
total_swap_memory_free_bytes   | 4283015168
disk_space_free_mb             | 756955
disk_space_used_mb             | 77128
disk_space_total_mb            | 834083
-[ RECORD 4 ]------------------+-----------------------------
current_timestamp              | 2009-09-01 18:20:06.957279
host_name                      | fc10-4.vertica.com
open_files_limit               | 65536
threads_limit                  | 139264
core_file_limit_max_size_bytes | 4096872448
processor_count                | 1
processor_core_count           | 4
processor_description          | Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz
opened_file_count              | 5
opened_socket_count            | 4
opened_nonfile_nonsocket_count | 3
total_memory_bytes             | 8393220096
total_memory_free_bytes        | 4375576576
total_buffer_memory_bytes      | 2455642112
total_memory_cache_bytes       | 764538880
total_swap_memory_bytes        | 4293586944
total_swap_memory_free_bytes   | 4292378624
disk_space_free_mb             | 756898
disk_space_used_mb             | 77185
disk_space_total_mb            | 834083

LOAD_STREAMS

Monitors load metrics for each load stream on each node.

Column Name             Data Type  Description
CURRENT_TIMESTAMP       VARCHAR    The Linux system time of query execution in a format that can be used as a Date/Time Expression (page 76).
STREAM_NAME             VARCHAR    The optional identifier that names a stream, if specified.
TABLE_NAME              VARCHAR    The name of the table being loaded.
LOAD_START              VARCHAR    The Linux system time when the load started.
ACCEPTED_ROW_COUNT      INTEGER    The number of rows loaded.
REJECTED_ROW_COUNT      INTEGER    The number of rows rejected.
READ_BYTES              INTEGER    The number of bytes read from the input file.
INPUT_FILE_SIZE_BYTES   INTEGER    The size of the input file in bytes. Note: When using STDIN as input, the size of the input file is zero (0).
PARSE_COMPLETE_PERCENT  INTEGER    The percent of the rows in the input file that have been loaded. If using STDIN, this column remains at zero (0) until the COPY statement is complete.

UNSORTED_ROW_COUNT      INTEGER    The number of rows that have not been sorted.
SORTED_ROW_COUNT        INTEGER    The number of rows that have been sorted.
SORT_COMPLETE_PERCENT   INTEGER    The percent of the rows in the input file that have been sorted.

Notes
If a COPY ... DIRECT statement is in progress, the ACCEPTED_ROW_COUNT field could increase up to the maximum number of rows in the input file as the rows are being parsed. However, PARSE_COMPLETE_PERCENT stays at 0 until the COPY operation has finished sorting, compressing, and writing the data to disk. This can take a significant amount of time, and it is easy to mistake this state for a hang. Check your system CPU and disk accesses to determine whether any activity is in progress before canceling the COPY or reporting a hang.

Example
Call the LOAD_STREAMS table:
SELECT * FROM load_streams;

LOCAL_NODES
Monitors the status of local nodes in the cluster.

Column Name            Data Type  Description
NODE_ID                INTEGER    The node ID assigned by Vertica.
SHUTDOWN_EPOCH         INTEGER    The shutdown epoch number.
BACKUP_SHUTDOWN_EPOCH  INTEGER    The backup-shutdown epoch number.

Example
SELECT * FROM local_nodes;
 node_id           | shutdown_epoch | backup_shutdown_epoch
-------------------+----------------+-----------------------
 45035996273704971 |                | 960
(1 row)

LOCKS
Monitors lock grants and requests for all nodes.

Column Name              Data Type  Description
NODE_NAMES               VARCHAR    The nodes on which lock interaction occurs. NODE_NAMES are separated by commas. Note on node rollup: If a transaction has the same lock in the same mode in the same scope on multiple nodes, it gets one (1) line in the table.
OBJECT_NAME              VARCHAR    The name of the object being locked; can be a TABLE or an internal structure (projection, global catalog, local catalog, Tuple Mover, epoch map).
OBJECT_ID                INTEGER    The unique OID assigned by the Vertica catalog to the object being locked.
TRANSACTION_DESCRIPTION  VARCHAR    The ID of the transaction and its associated description, typically the query that caused the transaction's creation.
LOCK_MODE                VARCHAR    Describes the intended operations of the transaction: S — share lock, needed for select operations; I — insert lock, needed for insert operations; X — exclusive lock, always needed for delete operations; X is also the result of lock promotion (see Table 2); T — Tuple Mover lock, used by the Tuple Mover and also used for COPY into pre-join projections.
LOCK_SCOPE               VARCHAR    The expected duration of the lock once it is granted. Before the lock is granted, the scope is listed as REQUESTED. Once a lock has been granted, the following scopes are possible: STATEMENT_LOCALPLAN, STATEMENT_COMPILE, STATEMENT_EXECUTE, TRANSACTION_POSTCOMMIT, TRANSACTION. All scopes, other than TRANSACTION, are transient and are used only as part of normal query processing.

Notes
• Locks acquired on tables that were subsequently dropped by another transaction can result in the message "Unknown or deleted object" appearing in the output's OBJECT column.
• Running a SELECT ... FROM LOCKS can time out after five minutes. This situation occurs when the cluster has failed. Run the Diagnostics Utility and contact Technical Support (on page 33).

The following two tables are from Transaction Processing: Concepts and Techniques (http://www.amazon.com/gp/product/1558601902/ref=s9sdps_c1_14_at1-rfc_p-frt_p-3237_g1_si1?pf_rd_m=ATVPDKIKX0DER&pf_rd_s=center-1&pf_rd_r=1QHH6V589JEV0DR3DQ1D&pf_rd_t=101&pf_rd_p=463383351&pf_rd_i=507846) by Jim Gray (Figure 7.11, p. 408 and Figure 8.6, p. 467).

Table 1: Compatibility matrix for granular locks
This table is for compatibility with other users. The table is symmetric.

                Granted Mode
Requested Mode  S    I    X    T
S               Yes  No   No   Yes
I               No   Yes  No   Yes
X               No   No   No   No
T               Yes  Yes  No   Yes

The following two examples refer to Table 1:
• Example 1: If someone else has an S lock, you cannot get an I lock.
• Example 2: If someone has an I lock, you can get an I lock.

Table 2: Lock conversion matrix
This table is used for upgrading locks you already have. For example, if you have an S lock and you want an I lock, you request an X lock. If you have an S lock and you want an S lock, no lock request is required.

                Granted Mode
Requested Mode  S    I    X    T
S               S    X    X    S
I               X    I    X    I
X               X    X    X    X
T               S    I    X    T

The following table call shows that there are no current locks in use:
SELECT * FROM LOCKS;
 node_names | object_name | object_id | transaction_description | lock_mode | lock_scope
------------+-------------+-----------+-------------------------+-----------+------------
(0 rows)

See Also
DUMP_LOCKTABLE (page 270)
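Because LOCK_SCOPE reports REQUESTED until a lock is granted, blocked requests can be pulled out of LOCKS directly. A minimal sketch (the column list and ORDER BY are illustrative, not required):

```sql
-- Show lock requests that have not yet been granted, with the holder's
-- transaction description, to see what each request is waiting on.
-- LOCK_SCOPE stays 'REQUESTED' until the lock is granted (see above).
SELECT node_names,
       object_name,
       transaction_description,
       lock_mode
FROM   locks
WHERE  lock_scope = 'REQUESTED'
ORDER  BY object_name;
```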

PROJECTION_REFRESHES (page 446)
SESSION_PROFILES (page 457)

NODE_RESOURCES
Provides a snapshot of the node. This is useful for regularly polling the node with automated tools or scripts.

Column Name                        Data Type  Description
CURRENT_TIMESTAMP                  VARCHAR    The Linux system time of query execution in a format that can be used as a Date/Time Expression (page 76).
NODE_NAME                          VARCHAR    The name of the node that is reporting the requested information.
HOST_NAME                          VARCHAR    The hostname associated with a particular node.
PROCESS_SIZE_BYTES                 INTEGER    The total size of the program.
PROCESS_RESIDENT_SET_SIZE_BYTES    INTEGER    The total number of pages that the process has in memory.
PROCESS_SHARED_MEMORY_SIZE_BYTES   INTEGER    The amount of shared memory used.
PROCESS_TEXT_MEMORY_SIZE_BYTES     INTEGER    The total number of text pages that the process has in physical memory. This does not include any shared libraries.
PROCESS_DATA_MEMORY_SIZE_BYTES     INTEGER    The amount of physical memory, in pages, used for performing processes. This does not include the executable code.
PROCESS_LIBRARY_MEMORY_SIZE_BYTES  INTEGER    The total number of library pages that the process has in physical memory.
PROCESS_DIRTY_MEMORY_SIZE_BYTES    INTEGER    The number of pages that have been modified since they were last written to disk.

Example
Call the NODE_RESOURCES table:
SELECT * FROM NODE_RESOURCES;
-[ RECORD 1 ]---------------------+---------------------------
current_timestamp                 | 2009-09-01 18:17:24.825473
node_name                         | site01
host_name                         | fc10-1.vertica.com
process_size_bytes                | 2665398272
process_resident_set_size_bytes   | 44126208
process_shared_memory_size_bytes  | 11071488
process_text_memory_size_bytes    | 32907264
process_data_memory_size_bytes    | 0
process_library_memory_size_bytes | 2531241984
process_dirty_memory_size_bytes   | 0
-[ RECORD 2 ]---------------------+---------------------------

current_timestamp                 | 2009-09-01 18:17:24.825051
node_name                         | site02
host_name                         | fc10-2.vertica.com
process_size_bytes                | 2529611776
process_resident_set_size_bytes   | 26202112
process_shared_memory_size_bytes  | 8208384
process_text_memory_size_bytes    | 32907264
process_data_memory_size_bytes    | 0
process_library_memory_size_bytes | 2395455488
process_dirty_memory_size_bytes   | 0
-[ RECORD 3 ]---------------------+---------------------------
current_timestamp                 | 2009-09-01 18:17:24.835606
node_name                         | site03
host_name                         | fc10-3.vertica.com
process_size_bytes                | 2530394112
process_resident_set_size_bytes   | 27426816
process_shared_memory_size_bytes  | 8208384
process_text_memory_size_bytes    | 32907264
process_data_memory_size_bytes    | 0
process_library_memory_size_bytes | 2396237824
process_dirty_memory_size_bytes   | 0
-[ RECORD 4 ]---------------------+---------------------------
current_timestamp                 | 2009-09-01 18:17:24.831238
node_name                         | site04
host_name                         | fc10-4.vertica.com
process_size_bytes                | 2529394688
process_resident_set_size_bytes   | 26603520
process_shared_memory_size_bytes  | 8220672
process_text_memory_size_bytes    | 32907264
process_data_memory_size_bytes    | 0
process_library_memory_size_bytes | 2395238400
process_dirty_memory_size_bytes   | 0

PARTITIONS
Displays partition metadata, one row per partition key, per ROS container.

Column Name      Data Type  Description
PARTITION_KEY    VARCHAR    The partition value
TABLE_SCHEMA     VARCHAR    The name of the schema
PROJECTION_NAME  VARCHAR    The projection name
ROS_ID           VARCHAR    The object ID that uniquely references the ROS container
ROS_SIZE_BYTES   INTEGER    The ROS container size in bytes
ROS_ROW_COUNT    INTEGER    Number of rows in the ROS container
NODE_NAME        VARCHAR    Node where the ROS container resides
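Because PARTITIONS reports one row per partition key per ROS container, simple aggregates answer the common questions. A sketch (the projection name 'p1' is illustrative):

```sql
-- Count how many ROS containers hold data for each partition key of one
-- projection (aggregate PARTITIONS over the partition_key column), and
-- how many rows those containers hold in total.
SELECT partition_key,
       COUNT(DISTINCT ros_id) AS ros_container_count,
       SUM(ros_row_count)     AS total_rows
FROM   partitions
WHERE  projection_name = 'p1'   -- illustrative projection name
GROUP  BY partition_key;
```

Grouping on ros_id instead gives the inverse view: the number of partition keys stored in each ROS container.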

Notes
• A many-to-many relationship exists between partitions and ROS containers. PARTITIONS displays information in a denormalized fashion.
• To find the number of partitions stored in a ROS container, you aggregate PARTITIONS over the ros_id column.
• To find the number of ROS containers having data of a specific partition, you aggregate PARTITIONS over the partition_key column.

Example
Projection 'p1' has three ROS containers, RC1, RC2, and RC3, with the values defined in the following table:

COLUMN NAME    RC1                RC2                RC3
PARTITION_KEY  (20,30,40)         (20)               (30,60)
ROS_ID         45035986273705000  45035986273705001  45035986273705002
SIZE           10000              20000              30000
ROS_ROW_COUNT  100                200                300
NODE_NAME      n1                 n1                 n1

In this example, PARTITIONS has six rows with the following values:
(20, 'p1', 45035986273705000, 10000, 100, 'n1')
(30, 'p1', 45035986273705000, 10000, 100, 'n1')
(40, 'p1', 45035986273705000, 10000, 100, 'n1')
(20, 'p1', 45035986273705001, 20000, 200, 'n1')
(30, 'p1', 45035986273705002, 30000, 300, 'n1')
(60, 'p1', 45035986273705002, 30000, 300, 'n1')

PROJECTION_REFRESHES
Provides information about refresh operations for projections. Information regarding refresh operations is maintained as follows:
• Information about a successful refresh is maintained until the refresh session is closed.
• Information about an unsuccessful refresh is maintained until the projection is the target of another refresh operation, whether or not the refresh session is closed.
• All refresh information for a node is lost when the node is shut down.
• After a refresh completes, the refreshed projections go into a single ROS container. Since queries on projections with multiple ROS containers perform better than queries on projections with a single ROS container, if the table was created with a PARTITION BY clause, then you should call PARTITION_TABLE() or PARTITION_PROJECTION() to reorganize the data into multiple ROS containers.

Column Name      Data Type  Description
NODE_NAME        VARCHAR    The name of the node that is reporting the requested information.
PROJECTION_NAME  VARCHAR    The name of the projection that is targeted for refresh.

ANCHOR_TABLE_NAME      VARCHAR  The name of the projection's anchor table.
REFRESH_STATUS         VARCHAR  The status of the projection:
                                Queued — Indicates that a projection is queued for refresh.
                                Refreshing — Indicates that a refresh for a projection is in process.
                                Refreshed — Indicates that a refresh for a projection has successfully completed.
                                Failed — Indicates that a refresh for a projection did not successfully complete.
REFRESH_PHASE          VARCHAR  Indicates how far the refresh has progressed:
                                Historical — Indicates that the refresh has reached the first phase and is refreshing data from historical data. This refresh phase requires the most amount of time. To complete this phase, refresh must be able to obtain a lock on the table. If the table is locked by some other transaction, refresh is put on hold until that transaction completes. The LOCKS (page 441) system table is useful for determining if a refresh has been blocked on a table lock. A refresh has been blocked when the scope for the refresh is REQUESTED and one or more other transactions have acquired a lock on the table. To determine if a refresh has been blocked, locate the term "refresh" in the transaction description.
                                Current — Indicates that the refresh has reached the final phase and is attempting to refresh data from the current epoch.
                                Note: This field is NULL until the projection starts to refresh.
REFRESH_METHOD         VARCHAR  The method used to refresh the projection:
                                Buddy — Uses the contents of a buddy to refresh the projection. This method maintains historical data. This enables the projection to be used for historical queries.
                                Scratch — Refreshes the projection without using a buddy. This method does not generate historical data. This means that the projection cannot participate in historical queries from any point before the projection was refreshed.
REFRESH_FAILURE_COUNT  INTEGER  The number of times a refresh failed for the projection. FAILURE_COUNT does not indicate whether or not the projection was eventually refreshed successfully. See REFRESH_STATUS to determine how the refresh operation is progressing.
SESSION_ID             VARCHAR  A unique ID that identifies the refresh session.
REFRESH_START          STRING   The time the projection refresh started (provided as a time stamp).
REFRESH_DURATION_SEC   INTEGER  The length of time that the projection refresh ran in seconds.
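The status columns above make it easy to poll for refreshes that are failing or have failed before. A minimal sketch (the string literals follow the status values listed above):

```sql
-- List projections whose refresh did not complete, together with how
-- often a refresh has failed for them (REFRESH_FAILURE_COUNT above).
SELECT node_name,
       projection_name,
       refresh_status,
       refresh_phase,
       refresh_failure_count
FROM   projection_refreshes
WHERE  refresh_status = 'Failed'
    OR refresh_failure_count > 0;
```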

Example
Call the PROJECTION_REFRESHES table:
SELECT * FROM PROJECTION_REFRESHES;

PROJECTION_STORAGE
Monitors the amount of disk storage used by each projection on each node.

Column Name              Data Type  Description
CURRENT_TIMESTAMP        VARCHAR    The Linux system time of query execution in a format that can be used as a Date/Time Expression (page 76).
NODE_NAME                VARCHAR    The name of the node that is reporting the requested information.
PROJECTION_NAME          VARCHAR    The name of the projection.
PROJECTION_SCHEMA        VARCHAR    The name of the schema associated with the projection.
PROJECTION_COLUMN_COUNT  INTEGER    The number of columns in the projection.
ROW_COUNT                INTEGER    The number of rows in the table's projections, excluding any rows marked for deletion.
USED_BYTES               INTEGER    The number of bytes of disk storage used by the projection.
WOS_ROW_COUNT            INTEGER    The number of WOS rows in the projection.
WOS_USED_BYTES           INTEGER    The number of WOS bytes in the projection.
ROS_ROW_COUNT            INTEGER    The number of ROS rows in the projection.
ROS_USED_BYTES           INTEGER    The number of ROS bytes in the projection.
ROS_COUNT                INTEGER    The number of ROS containers in the projection.
ANCHOR_TABLE_NAME        VARCHAR    The associated table name.
ANCHOR_TABLE_SCHEMA      VARCHAR    The associated table's schema name.

Example
SELECT projection_name, total_row_count, ros_used_bytes, total_used_bytes
FROM PROJECTION_STORAGE
WHERE projection_schema = 'store'
ORDER BY total_used_bytes;

 projection_name         | total_row_count | ros_used_bytes | total_used_bytes
-------------------------+-----------------+----------------+------------------
 store_dimension_site04  |             250 |          10715 |            10715
 store_dimension_site02  |             250 |          10715 |            10715
 store_dimension_site03  |             250 |          10715 |            10715
 store_dimension_site01  |             250 |          10715 |            10715
 store_orders_fact_p1_b1 |           24827 |         636533 |           636533
 store_orders_fact_p1    |           24827 |         636533 |           636533
 store_orders_fact_p1    |           24888 |         637872 |           637872
 store_orders_fact_p1_b1 |           24888 |         637872 |           637872
 store_orders_fact_p1    |           25010 |         641137 |           641137

 store_orders_fact_p1_b1 |           25010 |         641137 |           641137
 store_orders_fact_p1    |           25275 |         647623 |           647623
 store_orders_fact_p1_b1 |           25275 |         647623 |           647623
 store_sales_fact_p1_b1  |           24827 |         946657 |           946657
 store_sales_fact_p1     |           24827 |         946657 |           946657
 store_sales_fact_p1_b1  |           24888 |         949436 |           949436
 store_sales_fact_p1     |           24888 |         949436 |           949436
 store_sales_fact_p1     |           25010 |         952744 |           952744
 store_sales_fact_p1_b1  |           25010 |         952744 |           952744
 store_sales_fact_p1_b1  |           25275 |         962010 |           962010
 store_sales_fact_p1     |           25275 |         962010 |           962010
(20 rows)

QUERY_METRICS
Monitors the sessions and queries executing on each node.

Column Name                  Data Type  Description
CURRENT_TIMESTAMP            VARCHAR    The Linux system time of query execution in a format that can be used as a Date/Time Expression (page 76).
NODE_NAME                    VARCHAR    The name of the node that is reporting the requested information.
ACTIVE_USER_SESSION_COUNT    INTEGER    The number of active user sessions (connections).
ACTIVE_SYSTEM_SESSION_COUNT  INTEGER    The number of active system sessions.
TOTAL_USER_SESSION_COUNT     INTEGER    The total number of user sessions.
TOTAL_SYSTEM_SESSION_COUNT   INTEGER    The total number of system sessions.
TOTAL_ACTIVE_SESSION_COUNT   INTEGER    The total number of active user and system sessions.
TOTAL_SESSION_COUNT          INTEGER    The total number of user and system sessions.
RUNNING_QUERY_COUNT          INTEGER    The number of queries currently running.
EXECUTED_QUERY_COUNT         INTEGER    The total number of queries executed.

Example
=> SELECT * FROM QUERY_METRICS;
-[ RECORD 1 ]---------------+---------------------------
current_timestamp           | 2009-08-11 15:40:58.292286
node_name                   | site01
active_user_session_count   | 1
active_system_session_count | 2
total_user_session_count    | 50
total_system_session_count  | 45490
total_active_session_count  | 3
total_session_count         | 3
running_query_count         | 1
executed_query_count        | 76
-[ RECORD 2 ]---------------+---------------------------
current_timestamp           | 2009-08-11 15:40:58.241425
node_name                   | site02
active_user_session_count   | 1
active_system_session_count | 2
total_user_session_count    | 50
total_system_session_count  | 45491
total_active_session_count  | 3
total_session_count         | 3
running_query_count         | 0
executed_query_count        | 0
-[ RECORD 3 ]---------------+---------------------------
current_timestamp           | 2009-08-11 15:40:58.259699
node_name                   | site03
active_user_session_count   | 1
active_system_session_count | 2
total_user_session_count    | 50
total_system_session_count  | 45491
total_active_session_count  | 3
total_session_count         | 3
running_query_count         | 0
executed_query_count        | 0
-[ RECORD 4 ]---------------+---------------------------
current_timestamp           | 2009-08-11 15:40:58.292833
node_name                   | site04
active_user_session_count   | 1
active_system_session_count | 2
total_user_session_count    | 50
total_system_session_count  | 45490
total_active_session_count  | 3
total_session_count         | 3
running_query_count         | 0
executed_query_count        | 0

QUERY_PROFILES
Provides information regarding executed queries. To obtain information about executed queries, see Profiling Database Performance.

Column Name     Data Type  Description
NODE_NAME       VARCHAR    The name of the node that is reporting the requested information.
SESSION_ID      VARCHAR    The identification of the session for which profiling information is captured. This identifier is unique within the cluster at any point in time but can be reused when the session closes.
TRANSACTION_ID  INTEGER    An identifier for the transaction within the session, if any; otherwise NULL.

STATEMENT_ID         INTEGER    An ID for the currently executing statement. NULL indicates that no statement is currently being processed.
QUERY                VARCHAR    The query string used for the query.
QUERY_SEARCH_PATH    VARCHAR    A list of schemas in which to look for tables.
PROJECTIONS_USED     VARCHAR    The projections used in the query.
QUERY_DURATION_US    INTEGER    The duration of the query in microseconds.
QUERY_START_EPOCH    VARCHAR    The epoch number at the start of the given query.
QUERY_START          VARCHAR    The Linux system time of query execution in a format that can be used as a Date/Time Expression.
QUERY_TYPE           VARCHAR    Is one of INSERT, SELECT, UPDATE, DELETE, or UTILITY.
ERROR_CODE           INTEGER    The return error code for the query.
USER_NAME            VARCHAR    The name of the user who ran the query.
PROCESSED_ROW_COUNT  INTEGER    The number of rows returned by the query.

Example
Call the QUERY_PROFILES table:
SELECT * FROM QUERY_PROFILES;
-[ RECORD 1 ]-------+-------------------------------------------------------
...
-[ RECORD 22 ]------+-------------------------------------------------------
node_name           | site01
session_id          | fc10-1-16482:0x505e
transaction_id      | 45035996273812216
statement_id        | 2
query               | SELECT constraint_name, table_name, ordinal_position, reference_table_name FROM foreign_keys ORDER BY 3;
query_search_path   | "$user", public, v_catalog, v_monitor, v_system
projections_used    | v_catalog.foreign_keys_p
query_duration      | 30836
query_start_epoch   | 1416
query_start         | 2009-08-11 12:22:16.209952
query_type          | 1
error_code          | 0
user_name           | dbadmin
processed_row_count | 26
-[ RECORD 23 ]------+-------------------------------------------------------
node_name           | site01
session_id          | fc10-1-16482:0x50f1
transaction_id      | 45035996273812219
statement_id        | 2
query               | SELECT constraint_name, table_name, ordinal_position, reference_table_name FROM foreign_keys ORDER BY 4;
query_search_path   | "$user", public, v_catalog, v_monitor, v_system
projections_used    | v_catalog.foreign_keys_p
query_duration      | 19841
query_start_epoch   | 1416
query_start         | 2009-08-11 12:23:27.504098
query_type          | 1
error_code          | 0
user_name           | dbadmin
processed_row_count | 26
-[ RECORD 24 ]------+-------------------------------------------------------
node_name           | site01
session_id          | fc10-1-16482:0x535e
transaction_id      | 45035996273812233
statement_id        | 2
query               | SELECT * FROM foreign_keys;
query_search_path   | "$user", public, v_catalog, v_monitor, v_system
projections_used    | v_catalog.foreign_keys_p
query_duration      | 19860
query_start_epoch   | 1418
query_start         | 2009-08-11 12:28:24.744115
query_type          | 1
error_code          | 0
user_name           | dbadmin
processed_row_count | 26
-[ RECORD 25 ]------+-------------------------------------------------------
node_name           | site01
session_id          | fc10-1-16482:0x6342
transaction_id      | 45035996273812322
statement_id        | 0
query               | SELECT * FROM grants;
query_search_path   | "$user", public, v_catalog, v_monitor, v_system
projections_used    |
query_duration      | -1
query_start_epoch   | 1429
query_start         | 2009-08-11 13:00:44.217356
query_type          | 0
error_code          | 16932996
user_name           | dbadmin
processed_row_count | 0
-[ RECORD 26 ]------+-------------------------------------------------------
node_name           | site01
session_id          | fc10-1-16482:0x63df
transaction_id      | 45035996273812325
statement_id        | 2
query               | SELECT grantee, grantor, privileges_description, table_schema, table_name FROM v_system.vs_grants
query_search_path   | "$user", public, v_catalog, v_monitor, v_system
projections_used    | v_system.vs_grants_p
query_duration      | 19857
query_start_epoch   | 1429
query_start         | 2009-08-11 13:02:06.870678
query_type          | 1
error_code          | 0
user_name           | dbadmin
-[ RECORD ... ]-----+-------------------------------------------------------

RESOURCE_REJECTIONS
Monitors requests for resources that are rejected by the resource manager.

Column Name         Data Type  Description
CURRENT_TIMESTAMP   VARCHAR    The Linux system time of query execution in a format that can be used as a Date/Time Expression (page 76).
NODE_NAME           VARCHAR    The name of the node that is reporting the requested information.
ACCUMULATION_START  TIMESTAMP  The time of first rejection for this requester type.
REQUEST_TYPE        VARCHAR    The requester type.

REJECT_COUNT                      INTEGER  The total number of rejections for this requester type.
TIMEOUT_COUNT                     INTEGER  The total number of timeouts for this requester type.
CANCEL_COUNT                      INTEGER  The total number of cancellations for this requester type.
LAST_REQUEST_REJECTED_TYPE        VARCHAR  The last resource type rejected for this plan type.
LAST_REQUEST_REJECTED_REASON      VARCHAR  The reason for the last rejection of this plan type.
THREAD_REQUEST_COUNT              INTEGER  The total number of thread type rejections.
FILE_HANDLE_REQUEST_COUNT         INTEGER  The total number of file handle type rejections.
MEMORY_REQUIREMENTS_BYTES         INTEGER  The total number of memory type rejections.
ADDRESS_SPACE_REQUIREMENTS_BYTES  INTEGER  The total number of address space type rejections.

Requester Types
• Plans (see Plan Types)
• WOS

Plan Types
• Load Query
• Load Query Direct
• Insert Query
• Insert Query Direct
• Delete Query
• Select Query
• TM_MOVEOUT
• TM_MERGEOUT
• TM_ANALYZE
• TM_DIRECTLOAD
• TM_REDELETE_MOVE
• TM_REDELETE_MERGE
• RECOVER
• ROS_SPLIT
• TM_DVWOS_MOVEOUT
• REDELETE_RECOVER
• REFRESH_HISTORICAL
• REFRESH_CURRENT
• ROS_SPLIT_REDELETE_1
• ROS_SPLIT_REDELETE_2
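Since REJECT_COUNT accumulates per requester type, a quick health check is to select only the rows that have actually recorded rejections. A minimal sketch:

```sql
-- Show which plan types have seen rejections, with the most recent
-- rejection reason, skipping requester types with a zero REJECT_COUNT.
SELECT node_name,
       request_type,
       reject_count,
       last_request_rejected_reason
FROM   resource_rejections
WHERE  reject_count > 0
ORDER  BY reject_count DESC;
```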

Resource Types
• Number of running plans
• Number of running plans on initiator node (local)
• Number of requested Threads
• Number of requested File Handles
• Number of requested KB of Memory
• Number of requested KB of Address Space

Reasons for Rejection
• Usage of single request exceeds high limit
• Timed out waiting for resource reservation
• Canceled waiting for resource reservation

Example
Call the RESOURCE_REJECTIONS table:
SELECT * FROM RESOURCE_REJECTIONS;

RESOURCE_USAGE
Monitors system resource management on each node.

Column Name                 Data Type  Description
CURRENT_TIMESTAMP           VARCHAR    The Linux system time of query execution in a format that can be used as a Date/Time Expression (page 76).
NODE_NAME                   VARCHAR    The name of the node that is reporting the requested information.
REQUEST_COUNT               INTEGER    The cumulative number of requests for threads, file handles, and memory (in kilobytes).
LOCAL_REQUEST_COUNT         INTEGER    The cumulative number of local requests.
REQUEST_QUEUE_DEPTH         INTEGER    The current request queue depth.
ACTIVE_THREAD_COUNT         INTEGER    The current number of active threads.
OPEN_FILE_HANDLE_COUNT      INTEGER    The current number of open file handles.
MEMORY_REQUESTED_KB         INTEGER    The memory requested in kilobytes.
ADDRESS_SPACE_REQUESTED_KB  INTEGER    The address space requested in kilobytes.
WOS_USED_BYTES              INTEGER    The size of the WOS in bytes.
WOS_ROW_COUNT               INTEGER    The number of rows in the WOS.
ROS_USED_BYTES              INTEGER    The size of the ROS in bytes.
ROS_ROW_COUNT               INTEGER    The number of rows in the ROS.

TOTAL_USED_BYTES                 INTEGER  The total size of storage (WOS + ROS) in bytes.
TOTAL_ROW_COUNT                  INTEGER  The total number of rows in storage (WOS + ROS).
RESOURCE_REQUEST_REJECT_COUNT    INTEGER  The number of rejected plan requests.
RESOURCE_REQUEST_TIMEOUT_COUNT   INTEGER  The number of resource request timeouts.
RESOURCE_REQUEST_CANCEL_COUNT    INTEGER  The number of resource request cancellations.
DISK_SPACE_REQUEST_REJECT_COUNT  INTEGER  The number of rejected disk write requests.
FAILED_VOLUME_REJECT_COUNT       INTEGER  The number of rejections due to a failed volume.
TOKENS_USED                      INTEGER  For internal use only.
TOKENS_AVAILABLE                 INTEGER  For internal use only.

Example
=> SELECT * FROM RESOURCE_USAGE;
-[ RECORD 1 ]-------------------+---------------------------
current_timestamp               | 2009-08-11 16:22:50.005942
node_name                       | site01
request_count                   | 1
local_request_count             | 1
request_queue_depth             | 0
active_thread_count             | 4
open_file_handle_count          | 2
memory_requested_kb             | 4352
address_space_requested_kb      | 106752
wos_used_bytes                  | 0
wos_row_count                   | 0
ros_used_bytes                  | 10390319
ros_row_count                   | 324699
total_used_bytes                | 10390319
total_row_count                 | 324699
resource_request_reject_count   | 0
resource_request_timeout_count  | 0
resource_request_cancel_count   | 0
disk_space_request_reject_count | 0
failed_volume_reject_count      | 0
tokens_used                     | 1
tokens_available                | 7999999
-[ RECORD 2 ]-------------------+---------------------------
current_timestamp               | 2009-08-11 16:22:50.005965
node_name                       | site02
request_count                   | 0
local_request_count             | 0
request_queue_depth             | 0
active_thread_count             | 0
open_file_handle_count          | 0
memory_requested_kb             | 0
address_space_requested_kb      | 0
wos_used_bytes                  | 0
wos_row_count                   | 0
ros_used_bytes                  | 10359489
ros_row_count                   | 324182
total_used_bytes                | 10359489
total_row_count                 | 324182
resource_request_reject_count   | 0
resource_request_timeout_count  | 0
resource_request_cancel_count   | 0
disk_space_request_reject_count | 0
failed_volume_reject_count      | 0
tokens_used                     | 0
tokens_available                | 8000000
-[ RECORD 3 ]-------------------+---------------------------
current_timestamp               | 2009-08-11 16:22:50.005976
node_name                       | site03
request_count                   | 0
local_request_count             | 0
request_queue_depth             | 0
active_thread_count             | 0
open_file_handle_count          | 0
memory_requested_kb             | 0
address_space_requested_kb      | 0
wos_used_bytes                  | 0
wos_row_count                   | 0
ros_used_bytes                  | 10355231
ros_row_count                   | 324353
total_used_bytes                | 10355231
total_row_count                 | 324353
resource_request_reject_count   | 0
resource_request_timeout_count  | 0
resource_request_cancel_count   | 0
disk_space_request_reject_count | 0
failed_volume_reject_count      | 0
tokens_used                     | 0
tokens_available                | 8000000
-[ RECORD 4 ]-------------------+---------------------------
current_timestamp               | 2009-08-11 16:22:50.005986
node_name                       | site04
request_count                   | 0
local_request_count             | 0
request_queue_depth             | 0
active_thread_count             | 0
open_file_handle_count          | 0
memory_requested_kb             | 0
address_space_requested_kb      | 0
wos_used_bytes                  | 0
wos_row_count                   | 0
ros_used_bytes                  | 10385744
ros_row_count                   | 324870
total_used_bytes                | 10385744
total_row_count                 | 324870

resource_request_reject_count   | 0
resource_request_timeout_count  | 0
resource_request_cancel_count   | 0
disk_space_request_reject_count | 0
failed_volume_reject_count      | 0
tokens_used                     | 0
tokens_available                | 8000000

SESSION_PROFILES
Provides basic session parameters and lock time-out data. To obtain information about sessions, see Profiling Database Performance.

Column Name                       Data Type  Description
CURRENT_TIMESTAMP                 VARCHAR    The Linux system time of query execution in a format that can be used as a Date/Time Expression (page 76).
NODE_NAME                         VARCHAR    The name of the node that is reporting the requested information.
USER_NAME                         VARCHAR    The name used to log into the database or NULL if the session is internal.
CLIENT_HOSTNAME                   VARCHAR    The host name and port of the TCP socket from which the client connection was made; NULL if the session is internal.
LOGIN_TIMESTAMP                   VARCHAR    The date and time the user logged into the database or when the internal session was created. This can be useful for identifying sessions that have been left open for a period of time and could be idle.
LOGOUT_TIMESTAMP                  VARCHAR    The date and time the user logged out of the database or when the internal session was closed.
SESSION_ID                        VARCHAR    The identification of the session for which profiling information is captured. This identifier is unique within the cluster at any point in time but can be reused when the session closes.
EXECUTED_STATEMENT_SUCCESS_COUNT  INTEGER    The number of successfully executed statements.
EXECUTED_STATEMENT_FAILURE_COUNT  INTEGER    The number of unsuccessfully executed statements.
LOCK_GRANT_COUNT                  INTEGER    The number of locks granted during the session.
DEADLOCK_COUNT                    INTEGER    The number of deadlocks encountered during the session.
LOCK_TIMEOUT_COUNT                INTEGER    The number of times a lock timed out during the session.
LOCK_CANCELLATION_COUNT           INTEGER    The number of times a lock was cancelled during the session.

A string containing the hexadecimal representation of the transaction ID. A description of the current transaction. The date and time the user logged into the database or when the internal session was created. Column Name CURRENT_TIMESTAM P NODE_NAME USER_NAME CLIENT_HOSTNAME LOGIN_TIMESTAMP SESSION_ID VARCHAR TRANSACTION_STAR T TRANSACTION_ID TRANSACTION_DESC RIPTION STATEMENT_START DATE VARCHAR VARCHAR DATE -458- . The identifier required to close or interrupt a session. Data Type VARCHAR VARCHAR VARCHAR VARCHAR DATE Description The Linux system time of query execution in a format that can be used as a Date/Time Expression (page 76). The name used to log into the database or NULL if the session is internal. The host name and port of the TCP socket from which the client connection was made. The date/time the current transaction started or NULL if no transaction is running. The name of the node that is reporting the requested information.SQL Reference Manual LOCK_REJECTION_COUNT LOCK_ERROR_COUNT INTEGER INTEGER The number of times a lock was rejected during a session. The number of lock errors encountered during the session. This can be useful for identifying sessions that have been left open for a period of time and could be idle. or NULL if no statement is running. The date/time the current statement started execution. Example Call the SESSION_PROFILES table: SELECT * FROM SESSION_PROFILES. if any. You can use this table to: • • • • Identify users who are running long queries Identify users who are holding locks due to an idle but uncommitted transaction Disconnect users in order to shut down the database Determine the details behind the type of database security (Secure Socket Layer (SSL) or client authentication) used for a particular session. otherwise NULL. This identifier is unique within the cluster at any point in time but can be reused when the session closes. NULL if the session is internal. See Also LOCKS (page 441) SESSIONS Monitors external sessions.

STATEMENT_ID                VARCHAR    An ID for the currently executing statement. NULL indicates that no statement is currently being processed.
LAST_STATEMENT_DURATION_US  INTEGER    The duration of the last completed statement in microseconds.
CURRENT_STATEMENT           VARCHAR    The currently executing statement, if any; otherwise NULL.
SSL_STATE                   VARCHAR    Indicates if Vertica used Secure Socket Layer (SSL) for a particular session. Possible values are:
                                       None – Vertica did not use SSL.
                                       Server – Server authentication was used, so the client could authenticate the server.
                                       Mutual – Both the server and the client authenticated one another through mutual authentication.
                                       See Vertica Security and Implementing SSL.
AUTHENTICATION_METHOD       VARCHAR    The type of client authentication used for a particular session, if known. Possible values are: Unknown, Trust, Reject, Kerberos, Password, MD5, LDAP, Kerberos-GSS. See Vertica Security and Implementing Client Authentication.

Notes

•	The superuser has unrestricted access to all session information, but users can only view information about their own, current sessions.
•	During session initialization and termination, you might see sessions running only on nodes other than the node on which you executed the virtual table query. This is a temporary situation that corrects itself as soon as session initialization and termination completes.

Example

=> SELECT * FROM SESSIONS;
-[ RECORD 1 ]--------------+---------------------------------------------
current_timestamp          | 2009-08-11 16:32:16.540641
node_name                  | site01
user_name                  | dbadmin
client_hostname            | 127.0.0.1:36674
login_timestamp            | 2009-08-11 13:37:59.486908
session_id                 | fc10-1-16482:0x7586
transaction_start          | 2009-08-11 14:23:15.014816
transaction_id             | 0xa000000001a440
transaction_description    | user dbadmin (select * from node_resources;)

statement_start            | 2009-08-11 16:32:16.530551
statement_id               | 42949673042
last_statement_duration_ms | 26856
current_statement          | SELECT * FROM SESSIONS;
ssl_state                  | None
authentication_method      | Trust

STORAGE_CONTAINERS

Monitors information about each storage container in the database.

Column Name        Data Type  Description
NODE_NAME          VARCHAR    The name of the node that is reporting the requested information.
SCHEMA_NAME        VARCHAR    The name of the schema.
PROJECTION_NAME    VARCHAR    The name of the projection.
STORAGE_TYPE       VARCHAR    Type of storage container: ROS or WOS.
STORAGE_OID        INTEGER    The storage ID assigned by Vertica.
TOTAL_ROW_COUNT    VARCHAR    Total rows in the projection.
DELETED_ROW_COUNT  INTEGER    Rows deleted from the projection.
BYTES_USED         INTEGER    Size of the projection.
START_EPOCH        VARCHAR    The number of the start epoch.
END_EPOCH          VARCHAR    The number of the end epoch.
GROUPING           VARCHAR    The group by which columns are stored:
                              ALL – All columns are grouped.
                              PROJECTION – Columns are grouped according to the projection definition.
                              NONE – No columns are grouped, despite grouping in the projection definition.
                              OTHER – Some grouping, but neither all nor according to the projection (e.g., results from an ADD COLUMN).

Example

SELECT schema_name, projection_name, storage_type, grouping FROM storage_containers;
 schema_name |             projection_name             | storage_type |  grouping
-------------+-----------------------------------------+--------------+------------
 public      | product_dimension_tmp_initiator         | ROS          | ALL
 public      | product_dimension_grouped_tmp_initiator | ROS          | ALL
 public      | product_dimension_tmp_initiator         | ROS          | PROJECTION
 public      | product_dimension_grouped_tmp_initiator | ROS          | ALL
 public      | product_dimension_tmp_initiator         | ROS          | PROJECTION
 public      | product_dimension_grouped_tmp_initiator | ROS          | ALL
 public      | product_dimension_grouped_tmp_initiator | ROS          | ALL
 public      | product_dimension_grouped_tmp_initiator | ROS          | ALL
 public      | product_dimension_grouped_tmp_initiator | ROS          | ALL
 public      | product_dimension_tmp_initiator         | ROS          | PROJECTION
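The session-monitoring uses listed for SESSIONS can be written as ordinary queries. The following is a minimal sketch; the idle-session predicate is an assumption for illustration, not a rule from this manual. Sessions that have an open transaction but no currently executing statement are candidates for idle but uncommitted work:

SELECT session_id, user_name, client_hostname, transaction_start
FROM SESSIONS
WHERE transaction_id IS NOT NULL
  AND current_statement IS NULL;

The SESSION_ID value returned is the identifier required to close or interrupt the session, as described in the column table.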

 public      | product_dimension_tmp_initiator         | ROS          | PROJECTION
 public      | product_dimension_tmp_initiator         | ROS          | PROJECTION
 public      | product_dimension_tmp_initiator         | ROS          | PROJECTION
 public      | product_dimension_tmp_initiator         | ROS          | PROJECTION
 public      | product_dimension_tmp_initiator         | ROS          | PROJECTION
 public      | product_dimension_grouped_tmp_initiator | ROS          | ALL
 public      | product_dimension_grouped_tmp_initiator | ROS          | ALL
 public      | product_dimension_grouped_tmp_initiator | ROS          | ALL

SYSTEM

Monitors the overall state of the database.

Column Name               Data Type  Description
CURRENT_TIMESTAMP         VARCHAR    The Linux system time of query execution in a format that can be used as a Date/Time Expression (page 76).
CURRENT_EPOCH             INTEGER    The current epoch number.
AHM_EPOCH                 INTEGER    The AHM epoch number.
LAST_GOOD_EPOCH           INTEGER    The smallest (min) of all the checkpoint epochs on the cluster.
REFRESH_EPOCH             INTEGER    The oldest of the refresh epochs of all the nodes in the cluster.
DESIGNED_FAULT_TOLERANCE  INTEGER    The designed or intended K-Safety level.
NODE_COUNT                INTEGER    The number of nodes in the cluster.
NODES_DOWN_COUNT          INTEGER    The number of nodes in the cluster that are currently down.
CURRENT_FAULT_TOLERANCE   INTEGER    The number of node failures the cluster can tolerate before it shuts down automatically.
CATALOG_REVISION_NUM      INTEGER    The catalog version number.
WOS_USED_BYTES            INTEGER    The WOS size in bytes (cluster-wide).
WOS_ROW_COUNT             INTEGER    The number of rows in WOS (cluster-wide).
ROS_USED_BYTES            INTEGER    The ROS size in bytes (cluster-wide).
ROS_ROW_COUNT             INTEGER    The number of rows in ROS (cluster-wide).
TOTAL_USED_BYTES          INTEGER    The total storage in bytes (WOS + ROS) (cluster-wide).
TOTAL_ROW_COUNT           INTEGER    The total number of rows (WOS + ROS) (cluster-wide).

Example

Call the SYSTEM table:

mydb=> SELECT * FROM SYSTEM;
-[ RECORD 1 ]------------+---------------------------
current_timestamp        | 2009-08-11 17:09:54.651413
current_epoch            | 1512
ahm_epoch                | 961
last_good_epoch          | 1510
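Because SYSTEM reports cluster-wide state, a single query can serve as a quick health check. A sketch using only columns documented above (the sample output spells the down-node column node_down_count, so that spelling is used here):

SELECT designed_fault_tolerance, current_fault_tolerance, node_down_count
FROM SYSTEM;

If CURRENT_FAULT_TOLERANCE has dropped to 0, the next node failure shuts down the database automatically, so this value is worth polling together with the down-node count.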

refresh_epoch            | -1
designed_fault_tolerance | 1
node_count               | 4
node_down_count          | 0
current_fault_tolerance  | 1
catalog_revision_number  | 1590
wos_used_bytes           | 0
wos_row_count            | 0
ros_used_bytes           | 41490783
ros_row_count            | 1298104
total_used_bytes         | 41490783
total_row_count          | 1298104

TUPLE_MOVER_OPERATIONS

Monitors the status of the Tuple Mover on each node. No output means that the Tuple Mover is not performing an operation.

Column Name            Data Type  Description
CURRENT_TIMESTAMP      VARCHAR    The Linux system time of query execution in a format that can be used as a Date/Time Expression (page 76).
NODE_NAME              VARCHAR    The name of the node that is reporting the requested information.
OPERATION_NAME         VARCHAR    One of the following operations: Moveout, Mergeout, Analyze Statistics.
OPERATION_STATUS       VARCHAR    Running, or an empty string to indicate 'not running.'
PROJECTION_NAME        VARCHAR    The name of the projection being processed.
OPERATION_START_EPOCH  INTEGER    The first epoch of the mergeout operation. (Not applicable for other operations.)
OPERATION_END_EPOCH    INTEGER    The last epoch of the mergeout operation. (Not applicable for other operations.)
ROS_COUNT              INTEGER    The number of ROS containers.
TOTAL_ROS_USED_BYTES   INTEGER    The size in bytes of all ROS containers in the mergeout operation. (Not applicable for other operations.)
PLAN_TYPE              VARCHAR    One of the following values: Moveout, Mergeout, Analyze, Replay Delete.

Example

Call the TUPLE_MOVER_OPERATIONS table:

SELECT * FROM TUPLE_MOVER_OPERATIONS;

The following statement returns only a few columns from TUPLE_MOVER_OPERATIONS:

SELECT current_timestamp, node_name, operation_status, projection_name, plan_type
FROM TUPLE_MOVER_OPERATIONS;
     current_timestamp      | node_name | operation_status | projection_name |   plan_type
----------------------------+-----------+------------------+-----------------+---------------
 2009-09-02 10:02:33.775402 | node0001  | Running          | p1              | Mergeout
 2009-09-02 10:44:22.513913 | node0001  | Running          | p1_b2           | Mergeout
 2009-09-02 10:07:39.689642 | node0001  | Running          | p1_b2           | Mergeout
 2009-09-02 10:02:34.755708 | node0001  | Running          | p1_b2           | Replay Delete
 2009-09-02 10:07:38.331516 | node0001  | Running          | p1_b2           | Replay Delete
 2009-09-02 10:27:31.423701 | node0003  | Running          | p1_b2           | Replay Delete
 2009-09-02 10:44:21.773513 | node0002  | Running          | p1              | Mergeout
 2009-09-02 10:04:09.597861 | node0002  | Running          | p1_b2           | Mergeout
 2009-09-02 10:27:30.437047 | node0002  | Running          | p1              | Mergeout
 2009-09-02 10:27:31.856147 | node0002  | Running          | p1_b1           | Mergeout

WOS_CONTAINER_STORAGE

Monitors information about WOS storage, which is divided into regions. Each region allocates blocks of a specific size to store rows.

Column Name                 Data Type  Description
CURRENT_TIMESTAMP           VARCHAR    The Linux system time of query execution in a format that can be used as a Date/Time Expression (page 76).
NODE_NAME                   VARCHAR    The name of the node that is reporting the requested information.
WOS_TYPE                    VARCHAR    Either system or user data.
WOS_ALLOCATION_REGION       VARCHAR    The block size allocated by the region, in KB. The summary line sums the amount of memory used by all regions.
REGION_VIRTUAL_SIZE         INTEGER    The amount of virtual memory in use by the region, in KB.
REGION_ALLOCATED_SIZE       INTEGER    The amount of physical memory in use by a particular region, in KB.
REGION_IN_USE_SIZE          INTEGER    The actual number of bytes of data stored by the region, in KB.
REGION_SMALL_RELEASE_COUNT  INTEGER    Internal use only.
REGION_BIG_RELEASE_COUNT    INTEGER    Internal use only.

Notes

•	The WOS allocator can use large amounts of virtual memory without assigning physical memory. Virtual size is greater than or equal to allocated size, which is greater than or equal to in-use size.

•	To see the difference between virtual size and allocated size, look at the REGION_IN_USE_SIZE column to see if the WOS is full.
•	The summary line tells you the amount of memory used by the WOS, which is typically capped at one quarter of physical memory per node.

Examples

=> SELECT * FROM WOS_CONTAINER_STORAGE;
-[ RECORD 1 ]--------------+---------------------------
current_timestamp          | 2009-08-11 18:02:05.556007
node_name                  | site01
wos_type                   | user
wos_allocation_region      | Summary
region_virtual_size        | 0
region_allocated_size      | 0
region_in_use_size         | 0
region_small_release_count | 0
region_big_release_count   | 0
-[ RECORD 2 ]--------------+---------------------------
current_timestamp          | 2009-08-11 18:02:05.556012
node_name                  | site01
wos_type                   | v_system
wos_allocation_region      | Summary
region_virtual_size        | 0
region_allocated_size      | 0
region_in_use_size         | 0
region_small_release_count | 0
region_big_release_count   | 0
-[ RECORD 3 ]--------------+---------------------------
current_timestamp          | 2009-08-11 18:02:05.556015
node_name                  | site02
wos_type                   | user
wos_allocation_region      | Summary
region_virtual_size        | 0
region_allocated_size      | 0
region_in_use_size         | 0
region_small_release_count | 0
region_big_release_count   | 0
-[ RECORD 4 ]--------------+---------------------------
current_timestamp          | 2009-08-11 18:02:05.556018
node_name                  | site02
wos_type                   | v_system
wos_allocation_region      | Summary
region_virtual_size        | 0
region_allocated_size      | 0
region_in_use_size         | 0
region_small_release_count | 0
region_big_release_count   | 0
-[ RECORD 5 ]--------------+---------------------------
current_timestamp          | 2009-08-11 18:02:05.55602
node_name                  | site03
wos_type                   | user
wos_allocation_region      | Summary
region_virtual_size        | 0
region_allocated_size      | 0
region_in_use_size         | 0
region_small_release_count | 0
region_big_release_count   | 0

-[ RECORD 6 ]--------------+---------------------------
current_timestamp          | 2009-08-11 18:02:05.556023
node_name                  | site03
wos_type                   | v_system
wos_allocation_region      | Summary
region_virtual_size        | 0
region_allocated_size      | 0
region_in_use_size         | 0
region_small_release_count | 0
region_big_release_count   | 0
-[ RECORD 7 ]--------------+---------------------------
current_timestamp          | 2009-08-11 18:02:05.556025
node_name                  | site04
wos_type                   | user
wos_allocation_region      | Summary
region_virtual_size        | 0
region_allocated_size      | 0
region_in_use_size         | 0
region_small_release_count | 0
region_big_release_count   | 0
-[ RECORD 8 ]--------------+---------------------------
current_timestamp          | 2009-08-11 18:02:05.556026
node_name                  | site04
wos_type                   | v_system
wos_allocation_region      | Summary
region_virtual_size        | 0
region_allocated_size      | 0
region_in_use_size         | 0
region_small_release_count | 0
region_big_release_count   | 0
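As the notes for this table suggest, REGION_IN_USE_SIZE is the column to watch. A sketch that polls the per-node summary lines:

SELECT node_name, wos_type, region_virtual_size, region_allocated_size, region_in_use_size
FROM WOS_CONTAINER_STORAGE
WHERE wos_allocation_region = 'Summary';

Since virtual size is greater than or equal to allocated size, which is greater than or equal to in-use size, a large REGION_VIRTUAL_SIZE next to a small REGION_ALLOCATED_SIZE is expected and harmless; a REGION_IN_USE_SIZE approaching one quarter of physical memory suggests the WOS is filling up.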


Deprecated System Tables

The monitoring APIs listed in this section have been replaced by the tables listed in SQL System Tables (Monitoring APIs) (page 409).

Accessing Deprecated System Tables

If you run monitoring tools or scripts that rely on system tables in Vertica 3.0 and prior, you can access the deprecated tables for a limited time by using a backward compatibility script. This script creates views in the PUBLIC schema that are consistent with 3.0 functionality and grants SELECT permissions to all users. For information about the view names that the backward compatibility script creates, see /opt/vertica/scripts/virtual_tables_backward_compatibility.sql.

Note: Run the backward compatibility script only once, either after you upgrade Vertica to a newer version (on your existing databases) or after you create a new database. You do not need to run it each time you log in.

Enabling Deprecated Tables Manually

1	Upgrade Vertica or upgrade your drivers. Note: For major releases, the client must match the server. See Upgrading Vertica in the Installation and Configuration Guide for details.
2	Start your database and connect to it as the database owner. You'll see a prompt similar to the following:
	dbname=>
3	Type the following command to run the script:
	=> \i /opt/vertica/scripts/virtual_tables_backward_compatibility.sql

Enabling Deprecated Tables Using AdminTools

Alternatively, the first time you run AdminTools after you update your database, you have the option to let the system run the script for you.

To undo the effects of the backward compatibility script, simply drop the view or views created by it. For example:

DROP VIEW public.vt_partitions;

See Also

DROP VIEW (page 364) in the SQL Reference Manual

See also Using the SQL Monitoring API for more information.

VT_ACTIVE_EVENTS

Note: This table is deprecated. Use ACTIVE_EVENTS (page 424) instead.

Provides information about database events across the cluster. See Monitoring Events.

Column Name             Data Type  Description
TIMESTAMP               VARCHAR    The Linux system time of query execution in a format that can be used as a Date/Time Expression.
NODE_NAME               VARCHAR    The name of the node within the cluster that reported the event.
EVENT_CODE              INTEGER    A numeric ID that indicates the type of event. See Event Types for a list of event type codes.
EVENT_ID                INTEGER    A unique numeric ID that identifies the specific event.
EVENT_SEVERITY          VARCHAR    The severity of the event from highest to lowest. These events are based on standard syslog severity types: 0 (Emergency), 1 (Alert), 2 (Critical), 3 (Error), 4 (Warning), 5 (Notice), 6 (Informational), 7 (Debug).
EVENT_POSTED            VARCHAR    The year, month, day, and time the event was reported. The time is posted in military time.
EVENT_EXPIRATION        VARCHAR    The year, month, day, and time the event expires. The time is posted in military time. If the cause of the event is still active, the event is posted again.
EVENT_CODE_DESCRIPTION  VARCHAR    A generic description of the event.
PROBLEM_DESCRIPTION     VARCHAR    A brief description of the event and details pertinent to the specific situation.
REPORTING_NODE          VARCHAR    The node where the event occurred.
EVENT_SENT_TO           VARCHAR    The event logging mechanisms that are configured for Vertica. These can include vertica.log (configured by default), syslog, and SNMP.
NUM_TIMES_POSTED        INTEGER    Tracks the number of times an event occurs. Rather than posting the same event multiple times,
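A sketch of filtering this table for only the most serious outstanding events. The predicate assumes EVENT_SEVERITY stores the numeric syslog code as text, an assumption based on the VARCHAR type shown above:

SELECT event_posted, node_name, event_severity, problem_description
FROM VT_ACTIVE_EVENTS
WHERE event_severity <= '3';

String comparison works here only because the codes are single digits 0 through 7; adjust the predicate if your release reports severities as names instead.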

Vertica posts the event once and then counts the number of additional instances in which the event occurs.

VT_COLUMN_STORAGE

Note: This table is deprecated. Use COLUMN_STORAGE (page 426) instead.

Returns the amount of disk storage used by each column of each projection on each node.

Column Name      Data Type  Description
TIMESTAMP        VARCHAR    The Linux system time of query execution in a format that can be used as a Date/Time Expression.
NODE_NAME        VARCHAR    The name of the node that is reporting the requested information.
COLUMN_NAME      VARCHAR    A projection column name.
NUM_ROWS         INTEGER    The number of rows in the column (cardinality).
NUM_BYTES        INTEGER    The disk storage allocation of the column in bytes.
WOS_ROWS         INTEGER    The number of WOS rows in the column.
ROS_ROWS         INTEGER    The number of ROS rows in the column.
ROS_BYTES        INTEGER    The number of ROS bytes in the column.
NUM_ROS          INTEGER    The number of ROS containers.
PROJECTION_NAME  VARCHAR    The associated projection name.
TABLE_NAME       VARCHAR    The name of the table.

VT_CURRENT_SESSION

Note: This table is deprecated. Use CURRENT_SESSION (page 428) instead.

Monitors the current active session. You can use this table to find out the current session's sessionID and to get the duration of the previously run query.

Column Name  Data Type  Description
TIMESTAMP    VARCHAR    The Linux system time of query execution in a format that can be used as a DATE/TIME expression.
NODE_NAME    VARCHAR    The name of the node that is reporting the requested information.
USERNAME     VARCHAR    The name used to log into the database, or NULL if the session is internal.

CLIENT                    VARCHAR  The host name and port of the TCP socket from which the client connection was made; NULL if the session is internal.
LOGIN_TIME                DATE     The date and time the user logged into the database or when the internal session was created. This column can be useful for identifying sessions that have been left open and could be idle.
SESSIONID                 VARCHAR  The identifier required to close or interrupt a session. This identifier is unique within the cluster at any point in time but can be reused when the session closes.
TXN_START                 DATE     The date/time the current transaction started, or NULL if no transaction is running.
TXNID                     VARCHAR  A string containing the hexadecimal representation of the transaction ID, if any; otherwise NULL.
TXN_DESCRIPT              VARCHAR  A description of the current transaction.
STMT_START                DATE     The date/time the current statement started execution, or NULL if no statement is running.
STMTID                    VARCHAR  An ID for the currently executing statement. NULL indicates that no statement is currently being processed.
LAST_STMT_DURATION        INTEGER  The duration of the last completed statement in milliseconds.
CURRENT_STMT              VARCHAR  The currently executing statement, if any; otherwise NULL.
LAST_STMT                 VARCHAR  NULL if the user has just logged in; otherwise the currently running statement or the most recently completed statement.
EE_PROFILING_CONFIG       VARCHAR  Returns a value that indicates whether profiling is turned on. Results are: empty when no profiling; 'Local' when profiling is on for this session; 'Global' when on by default for all sessions; 'Local, Global' when on by default for all sessions and on for the current session.
QRY_PROFILING_CONFIG      VARCHAR  Returns a value that indicates whether profiling is turned on: empty when no profiling; 'Local' when on for this session; 'Global' when on by default for all sessions; 'Local, Global' when on by default for all sessions and on for the current session.
SESSION_PROFILING_CONFIG  VARCHAR  Returns a value that indicates whether profiling is turned on: empty when no profiling; 'Local' when on for this session; 'Global' when on by default for all sessions; 'Local, Global' when on by default for all sessions and on for the current session.

Notes

•	The default for profiling is ON ('1') for all sessions. Each session can turn profiling ON or OFF. To turn profiling off, set the parameter to '0'. To turn profiling on, set the parameter to '1'.
•	Profiling parameters (such as GlobalEEProfiling in the examples below) are set in the Vertica configuration file (vertica.conf).

Examples

The sequence of commands in this example shows the use of disabling and enabling profiling for local and global sessions.

This command disables EE profiling for query execution runs:

SELECT disable_profiling('EE');
   disable_profiling
-----------------------
 EE Profiling Disabled
(1 row)

The following command sets the GlobalEEProfiling configuration parameter to 0, which turns off profiling:

SELECT set_config_parameter('GlobalEEProfiling', '0');
    set_config_parameter
----------------------------
 Parameter set successfully
(1 row)

The following command tells you whether profiling is set to 'Local' or 'Global' or none:

SELECT ee_profiling_config FROM vt_current_session;
 ee_profiling_config
---------------------
(1 row)

Note: The result set is empty because profiling was turned off in the preceding example.

This command now enables EE profiling for query execution runs:

SELECT enable_profiling('EE');
   enable_profiling
----------------------
 EE Profiling Enabled
(1 row)

Now when you run a select on the VT_CURRENT_SESSION table, you can see profiling is ON for the local session:

SELECT ee_profiling_config FROM vt_current_session;
 ee_profiling_config
---------------------
 Local
(1 row)

Now turn profiling on for all sessions by setting the GlobalEEProfiling configuration parameter to 1:

SELECT set_config_parameter('GlobalEEProfiling', '1');
    set_config_parameter
----------------------------
 Parameter set successfully
(1 row)

Now when you run a select on the VT_CURRENT_SESSION table, you can see profiling is ON for the local session, as well as for all sessions:

SELECT ee_profiling_config FROM vt_current_session;
 ee_profiling_config
---------------------
 Local, Global
(1 row)

See Also

VT_EE_PROFILING (page 473), VT_QUERY_PROFILING (page 485), and VT_SESSION_PROFILING (page 490)

VT_DISK_RESOURCE_REJECTIONS

Note: This table is deprecated. Use DISK_RESOURCE_REJECTIONS (page 431) instead.

Monitors requests for resources that are rejected due to disk space shortages.

Column Name         Data Type  Description
TIMESTAMP           VARCHAR    The Linux system time of query execution in a format that can be used as a Date/Time Expression (page 76).
NODE_NAME           VARCHAR    The name of the node that is reporting the requested information.
ACCUMULATION_START  VARCHAR    The time of the first request rejection for this requester.
REQUESTER           VARCHAR    The resource request requester (example: plan type).
DISK_SPACE_RJT      INTEGER    The total number of disk space resource requests rejected.
FAILED_VOLUME_RJT   INTEGER    The total number of disk space resource requests on a failed volume.

VT_DISK_STORAGE

Note: This table is deprecated. Use DISK_STORAGE (page 432) instead.

Monitors the amount of disk storage used by the database on each node.

Column Name      Data Type  Description
TIMESTAMP        VARCHAR    The Linux system time of query execution in a format that can be used as a Date/Time Expression (page 76).
NODE_NAME        VARCHAR    The name of the node that is reporting the requested information.
DISK_BLK_SIZE    INTEGER    The block size of the disk.
USED_BLKS        INTEGER    The number of disk blocks in use.
MB_USED          INTEGER    The number of megabytes of disk storage in use.
FREE_BLKS        INTEGER    The number of free disk blocks available.
MB_FREE          INTEGER    The number of megabytes of free storage available.
PERCENTAGE_FREE  INTEGER    The percentage of free disk space remaining.

VT_EE_PROFILING

Note: This table is deprecated. Use EXECUTION_ENGINE_PROFILES (page 436) instead.

Provides information regarding query execution runs. To obtain information about query execution runs for your database, see Profiling Database Performance.

Column Name  Data Type  Description
TIMESTAMP    VARCHAR    The Linux system time of query execution in a format that can be used as a Date/Time Expression.
NODE_NAME    VARCHAR    The name of the node that is reporting the requested information.
SESSIONID    VARCHAR    The identification of the session for which profiling information is captured. This identifier is unique within the cluster at any point in time but can be reused when the session closes.
TXNID        INTEGER    An identifier for the transaction within the session, if any.
STMTID       INTEGER    An ID for the currently executing statement. NULL indicates that no statement is currently being processed.

OPERATOR_NAME  VARCHAR  The name of the user who started the session.
OPERATORID     INTEGER  The ID of the user who started the session.
COUNTER_NAME   VARCHAR  The name of the counter. See COUNTER_NAME Values below.
COUNTER_VALUE  INTEGER  The value of the counter.

COUNTER_NAME Values

The value of COUNTER_NAME can be any of the following:

COUNTER_NAME          Description
bytes sent            The number of bytes sent over the network for the query execution.
bytes received        The number of bytes received over the network for the query execution.
rows produced         The number of rows produced by the EE operator.
executable time (ms)  The time required to execute the query (in milliseconds).

VT_GRANT

Note: This table is deprecated. Use GRANTS (page 413) instead.

Provides grant information.

Column Name  Data Type  Description
GRANTEE_ID   INTEGER    The grantee object ID (OID) from the catalog.
GRANTEE      VARCHAR    The user being granted permission.
GRANTOR_ID   INTEGER    The object ID from the catalog.
GRANTOR      VARCHAR    The user granting permission.
PRIVILEGES   INTEGER    The bitmask representation of the privileges being granted.

PRIVILEGES_DESC  VARCHAR  A readable description of the privileges being granted, for example INSERT, SELECT.
OBJECTID         INTEGER  The object ID from the catalog.
SCHEMANAME       VARCHAR  The name of the schema.
TABLENAME        VARCHAR  The name of the table.

Notes

The vsql commands \dp and \z both include the schema name in the output:

$ \dp
                    Access privileges for database "dbadmin"
 Grantee | Grantor |                 Privileges                 | Schema  |  Name
---------+---------+--------------------------------------------+---------+--------
 user2   | dbadmin | INSERT, SELECT, UPDATE, DELETE, REFERENCES | schema1 | events
 user1   | dbadmin | SELECT                                     | schema1 | events
 user2   | dbadmin | INSERT, SELECT, UPDATE, DELETE, REFERENCES | schema2 | events
 user1   | dbadmin | INSERT, SELECT                             | schema2 | events
(4 rows)

Call the VT_GRANT table:

SELECT * FROM vt_grant;

 Grantee | Grantor | Privileges | Schema | Name
---------+---------+------------+--------+--------
         | dbadmin | USAGE      |        | public
(1 row)

The vsql command \dp *.tablename displays table names in all schemas. This command lets you distinguish the grants for same-named tables in different schemas:

$ \dp *.events
                    Access privileges for database "dbadmin"
 Grantee | Grantor |                 Privileges                 | Schema  |  Name
---------+---------+--------------------------------------------+---------+--------
 user2   | dbadmin | INSERT, SELECT, UPDATE, DELETE, REFERENCES | schema1 | events
 user1   | dbadmin | SELECT                                     | schema1 | events
 user2   | dbadmin | INSERT, SELECT, UPDATE, DELETE, REFERENCES | schema2 | events
 user1   | dbadmin | INSERT, SELECT                             | schema2 | events
(4 rows)

The vsql command \dp schemaname.* displays all tables in the named schema:

$ \dp schema1.*
                    Access privileges for database "dbadmin"
 Grantee | Grantor |                 Privileges                 | Schema  |  Name
---------+---------+--------------------------------------------+---------+--------
 user2   | dbadmin | INSERT, SELECT, UPDATE, DELETE, REFERENCES | schema1 | events
 user1   | dbadmin | SELECT                                     | schema1 | events
(2 rows)
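The same disambiguation of same-named tables that \dp *.tablename provides can also be done directly in SQL against VT_GRANT, using only columns documented above. The table name 'events' is the one from the examples; substitute whatever table you are investigating:

SELECT grantee, grantor, privileges_desc, schemaname, tablename
FROM vt_grant
WHERE tablename = 'events'
ORDER BY schemaname, grantee;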

VT_LOAD_STREAMS

Note: This table is deprecated. Use LOAD_STREAMS (page 440) instead.

Monitors load metrics for each load stream on each node.

Column Name           Data Type  Description
TIMESTAMP             VARCHAR    The Linux system time of query execution in a format that can be used as a Date/Time Expression.
NODE_NAME             VARCHAR    The name of the node that is reporting the requested information.
STREAM                VARCHAR    The optional identifier that names a stream, if specified.
TABLE_NAME            VARCHAR    The name of the table being loaded.
LOAD_START_TIMESTAMP  VARCHAR    The Linux system time when the load started.
ROWS_LOADED           INTEGER    The number of rows loaded.
ROWS_REJECTED         INTEGER    The number of rows rejected.
BYTES_READ            INTEGER    The number of bytes read from the input file.
INPUT_FILE_SIZE       INTEGER    The size of the input file in bytes. Note: When using STDIN as input, the input file size is zero (0).
PERCENT_COMPLETE      INTEGER    The percent of the rows in the input file that have been loaded. If using STDIN, this column remains at zero (0) until the COPY statement is complete.

Example

=> \pset expanded

Call the VT_LOAD_STREAMS table:

SELECT * FROM VT_LOAD_STREAMS;

VT_LOCK

Note: This table is deprecated. Use LOCKS (page 441) instead.

Monitors the locks in use.
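For a long-running COPY, this table can be polled to watch progress. A sketch, in which the table name fact_sales is hypothetical:

SELECT stream, table_name, rows_loaded, rows_rejected, percent_complete
FROM VT_LOAD_STREAMS
WHERE table_name = 'fact_sales';

Keep in mind that for STDIN loads PERCENT_COMPLETE stays at zero until the COPY completes, so a zero here does not necessarily mean the load is stalled.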

Column Name  Data Type  Description
NODE_NAMES   VARCHAR    The nodes on which lock interaction occurs. Note on node rollup: If a transaction has the same lock in the same mode in the same scope on multiple nodes, it gets one (1) line in the table, and the NODE_NAMES are separated by commas.
OBJECT       VARCHAR    The name of the object being locked; can be a TABLE or an internal structure (projection, global catalog, local catalog, epoch map).
OID          INTEGER    The object ID of the object being locked.
TRANSACTION  VARCHAR    The ID of the transaction and an associated description, typically the query that caused the transaction's creation.
MODE         VARCHAR    The lock mode describes the intended operations of the transaction:
                        S – Share lock, needed for select operations.
                        I – Insert lock, needed for insert operations.
                        X – Exclusive lock, always needed for delete operations. An X lock is also the result of lock promotion (see Table 2).
                        T – Tuple Mover lock, used by the Tuple Mover and also used for COPY into pre-join projections.
SCOPE        VARCHAR    The expected duration of the lock once it is granted. Before the lock is granted, the scope is listed as REQUESTED. Once a lock has been granted, the following scopes are possible: STATEMENT_LOCALPLAN, STATEMENT_COMPILE, STATEMENT_EXECUTE, TRANSACTION_POSTCOMMIT, TRANSACTION. All scopes, other than TRANSACTION, are transient and are used only as part of normal query processing.

Notes

•	Locks acquired on tables that were subsequently dropped by another transaction can result in the message "Unknown or deleted object" appearing in the output's OBJECT column.
•	Running a SELECT … FROM VT_LOCK can time out after five minutes. This situation occurs when the cluster has failed. Run the Diagnostics Utility and contact Technical Support (on page 33).

The following two tables are from Transaction Processing: Concepts and Techniques (http://www.amazon.com/gp/product/1558601902/ref=s9sdps_c1_14_at1-rfc_p-frt_p-3237_g1_si1?pf_rd_m=ATVPDKIKX0DER&pf_rd_s=center-1&pf_rd_r=1QHH6V589JEV0DR3DQ1D&pf_rd_t=101&pf_rd_p=463383351&pf_rd_i=507846) by Jim Gray (Figure 7.11, p. 408 and Figure 8.6,

p. 467).

Table 1: Compatibility matrix for granular locks

This table is for compatibility with other users. The table is symmetric.

                  Granted Mode
Requested Mode    S     I     X     T
S                 Yes   No    No    Yes
I                 No    Yes   No    Yes
X                 No    No    No    No
T                 Yes   Yes   No    Yes

The following two examples refer to Table 1:

•	Example 1: If someone else has an S lock, you cannot get an I lock.
•	Example 2: If someone has an I lock, you can get an I lock.

Table 2: Lock conversion matrix

This table is used for upgrading locks you already have. For example, if you have an S lock and you want an I lock, you request an X lock. If you have an S lock and you want an S lock, no lock request is required.

                  Granted Mode
Requested Mode    S     I     X     T
S                 S     X     X     S
I                 X     I     X     I
X                 X     X     X     X
T                 S     I     X     T

See Also

DUMP_LOCKTABLE (page 270)
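To relate the matrices to a live system, a sketch that lists the locks currently held or requested, using only columns documented above:

SELECT node_names, object, transaction, mode, scope
FROM VT_LOCK
ORDER BY object, scope;

A row whose SCOPE is REQUESTED alongside another transaction's granted X lock on the same OBJECT is the 'No' cells of Table 1 in action: the request waits until the exclusive lock is released.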

VT_NODE_INFO

Note: This table is deprecated. Use HOST_RESOURCES (page 437) or NODE_RESOURCES (page 444) instead.

Provides a snapshot of the node. This is useful for regularly polling the node with automated tools or scripts.

Column Name                   Data Type  Description
TIMESTAMP                     VARCHAR    The Linux system time of query execution in a format that can be used as a Date/Time Expression.
NODE_NAME                     VARCHAR    The name of the node that is reporting the requested information.
NUM_OPEN_FILES_LIMIT          INTEGER    The maximum number of files that can be open at one time on the node.
NUM_THREADS_LIMIT             INTEGER    The maximum number of threads that can coexist on the node.
MAX_CORE_FILE_SIZE_LIMIT      INTEGER    The maximum core file size allowed on the node.
NUM_PROCESSORS                INTEGER    The number of system processors.
NUM_PROCESSOR_CORES           INTEGER    The number of processor cores in the system.
PROCESSOR_DESCRIPTION         VARCHAR    A description of the processor. For example: Intel(R) Core(TM)2 Duo CPU T8100 @2.10GHz
NUM_OPENED_FILES              INTEGER    The total number of open files on the node.
NUM_OPENED_SOCKETS            INTEGER    The total number of open sockets on the node.
NUM_NONFILE_NONSOCKET_OPENED  INTEGER    The total number of other file descriptions open, in which "other" could be a directory or FIFO. It is not an open file or socket.
TOTAL_MEMORY                  INTEGER    The total amount of physical RAM, in kilobytes, available on the system.
TOTAL_MEMORY_FREE             INTEGER    The amount of physical RAM, in kilobytes, left unused by the system.
TOTAL_BUFFER_MEMORY           INTEGER    The amount of physical RAM, in kilobytes, used for file buffers on the system.
TOTAL_MEMORY_CACHE            INTEGER    The amount of physical RAM, in kilobytes, used as cache memory on the system.

*The amount of physical memory. per ROS container. VT_PARTITIONS Note: This table is deprecated. on the system. Use PARTITIONS (page 445) instead. The total amount of swap memory free. *The number of pages that have been modified since they were last written to disk. *The total number of library pages that the process has in physical memory.SQL Reference Manual system. PROCESS_DATA_MEMORY_SIZE INTEGER PROCESS_LIBRARY_MEMORY_SIZE PROCESS_DIRTY_MEMORY_SIZE INTEGER INTEGER MB_FREE_DISK_SPACE INTEGER MB_USED_DISK_SPACE MB_TOTAL_DISK_SPACE INTEGER INTEGER * Each page is 4096 bytes in size. *The total size of the program (in pages). The total free disk space available (in megabytes) for all storage location file systems. This does not include the executable code. Displays partition metadata. The disk space used (in megabytes) for all storage location file systems. on the system. in kilobytes. The free disk space available (in megabytes) for all storage location file systems (data directories). This does not include any shared libraries. in kilobytes. -480- . *The amount of shared memory used (in pages). one row per partition key. used for performing processes. TOTAL_SWAP_MEMORY TOTAL_SWAP_MEMORY_FREE PROCESS_SIZE PROCESS_RESIDENT_SET_SIZE PROCESS_SHARED_MEMORY_SIZE PROCESS_TEXT_MEMORY_SIZE INTEGER INTEGER INTEGER INTEGER INTEGER INTEGER The total amount of swap memory available. *The total number of pages that the process has in memory. *The total number of text pages that the process has in physical memory. in pages.
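As a quick usage sketch (the columns are described in the reference that follows; this assumes the deprecated table is still queryable in this release — PARTITIONS is the supported replacement, and 'p1' is a placeholder projection name):

```sql
-- List partition metadata for one projection, one row per partition key
-- per ROS container.
SELECT partition_key, rosid, size, rowcount, location
FROM vt_partitions
WHERE projname = 'p1';
```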

Column Name     Data Type   Description
PARTITION_KEY   VARCHAR     The partition value
SCHEMANAME      VARCHAR     The name of the schema
PROJNAME        VARCHAR     The projection name
ROSID           VARCHAR     The object ID that uniquely references the ROS container
SIZE            INTEGER     The ROS container size in bytes
ROWCOUNT        INTEGER     Number of rows in the ROS container
LOCATION        VARCHAR     Site where the ROS container resides

Notes

•   A many-to-many relationship exists between partitions and ROS containers. VT_PARTITIONS displays information in a denormalized fashion.
•   To find the number of ROS containers having data of a specific partition, you aggregate VT_PARTITIONS over the partition_key column.
•   To find the number of partitions stored in a ROS container, you aggregate VT_PARTITIONS over the ros_id column.

Example

Projection 'p1' has three ROS containers, RC1, RC2 and RC3, with the values defined in the following table:

                  RC1                 RC2                 RC3
----------------+-------------------+-------------------+-----------------
PARTITION_KEY    (20,30,40)          (20)                (30,60)
ROS_ID           45035986273705000   45035986273705001   45035986273705002
SIZE             10000               20000               30000
ROWCOUNT         100                 200                 300
LOCATION         e1                  e1                  e1

In this example, VT_PARTITIONS has six rows with the following values:

(20, 'p1', 45035986273705000, 10000, 100, 'e1')
(30, 'p1', 45035986273705000, 10000, 100, 'e1')
(40, 'p1', 45035986273705000, 10000, 100, 'e1')
(20, 'p1', 45035986273705001, 20000, 200, 'e1')
(30, 'p1', 45035986273705002, 30000, 300, 'e1')
(60, 'p1', 45035986273705002, 30000, 300, 'e1')

VT_PROJECTION

Note: This table is deprecated. Use PROJECTIONS (page 416) instead.

This metadata table provides information regarding projections.

-481-
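Before the column reference that follows, a usage sketch (it assumes the deprecated VT_PROJECTION is still queryable; PROJECTIONS is the supported replacement). Because projections must be up to date to be used in queries, out-of-date ones are worth finding:

```sql
-- Find projections that are not up to date (UPTODATE is 'f', per the
-- column descriptions that follow).
SELECT schemaname, projname, sitename
FROM vt_projection
WHERE uptodate = FALSE;
```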

Column Name        Data Type   Description
SCHEMAID           OID         A unique numeric ID (OID) that identifies the specific schema that contains the projection.
SCHEMANAME         VARCHAR     The name of the schema that contains the projection.
PROJID             OID         A unique numeric ID (OID) that identifies the projection.
PROJNAME           VARCHAR     The name of the projection.
OWNERID            OID         A unique numeric ID (OID) that identifies the owner of the projection.
OWNERNAME          VARCHAR     The name of the projection's owner.
ANCHORTABLEID      OID         The unique numeric identification (OID) of the anchor table, for pre-join projections, or the OID of the table from which the projection was created if it isn't a pre-join projection.
ANCHORTABLENAME    VARCHAR     The name of the anchor table, for pre-join projections, or the name of the table from which the projection was created if it isn't a pre-join projection.
SITEID             OID         A unique numeric ID (OID) that identifies the node, or nodes, that contain the projection.
SITENAME           VARCHAR     The name of the node, or nodes, that contain the projection.
PREJOIN            BOOLEAN     Indicates whether or not the projection is a pre-join projection, where t is true and f is false.
CREATEEPOCH        OID         The epoch in which the projection was created.
VERIFIEDK          INTEGER     K-safety value for the projection.
UPTODATE           BOOLEAN     Indicates whether or not the projection is current, where t is true and f is false. Projections must be up-to-date to be used in queries.

VT_PROJECTION_REFRESH

Note: This table is deprecated. Use PROJECTION_REFRESHES (page 446) instead.

Provides information about refresh operations for projections. Information regarding refresh operations is maintained as follows:

•   Information about a successful refresh is maintained until the refresh session is closed.
•   Information about an unsuccessful refresh is maintained, whether or not the refresh session is closed, until the projection is the target of another refresh operation.
•   All refresh information for a node is lost when the node is shut down.

-482-

•   After a refresh completes, the refreshed projections go into a single ROS container. If the table was created with a PARTITION BY clause, then you should call PARTITION_TABLE() or PARTITION_PROJECTION() to reorganize the data into multiple ROS containers, since queries on projections with multiple ROS containers perform better than queries on projections with a single ROS container.

The LOCKS (page 441) system table is useful for determining if a refresh has been blocked on a table lock. A refresh has been blocked when the scope for the refresh is REQUESTED and one or more other transactions have acquired a lock on the table. To determine if a refresh has been blocked, locate the term "refresh" in the transaction description.

Column Name        Data Type   Description
NODE_NAME          VARCHAR     The name of the node upon which the refresh operation is running or ran.
PROJECTION_NAME    VARCHAR     The name of the projection that is targeted for refresh.
TABLE_NAME         VARCHAR     The name of the projection's anchor table.
STATUS             VARCHAR     The status of the projection:
                               queued--Indicates that a projection is queued for refresh.
                               refreshing--Indicates that a refresh for a projection is in process.
                               refreshed--Indicates that a refresh for a projection has successfully completed.
                               failed--Indicates that a refresh for a projection did not successfully complete.
PHASE              VARCHAR     Indicates how far the refresh has progressed:
                               historical--Indicates that the refresh has reached the first phase and is refreshing data from historical data. This refresh phase requires the most amount of time.
                               current--Indicates that the refresh has reached the final phase and is attempting to refresh data from the current epoch. To complete this phase, refresh must be able to obtain a lock on the table. If the table is locked by some other transaction, refresh is put on hold until that transaction completes.
                               Note: This field is null until the projection starts to refresh.
METHOD             VARCHAR     The method used to refresh the projection:
                               buddy--Uses the contents of a buddy to refresh the projection. This method maintains historical data, which enables the projection to be used for historical queries.
                               scratch--Refreshes the projection without using a buddy. This method does not generate historical data, which means that the projection cannot participate in historical queries from any point before the projection was refreshed.

-483-

FAILURE_COUNT      INTEGER     The number of times a refresh failed for the projection. FAILURE_COUNT does not indicate whether or not the projection was eventually refreshed successfully. See STATUS to determine how the refresh operation is progressing.
SESSION_ID         VARCHAR     A unique ID that identifies the refresh session.
START_TIME         STRING      The time the projection refresh started (provided as a time stamp).
DURATION           INTEGER     The length of time that the projection refresh ran in seconds.

VT_PROJECTION_STORAGE

Note: This table is deprecated. Use PROJECTION_STORAGE (page 448) instead.

Monitors the amount of disk storage used by each projection on each node.

Column Name        Data Type   Description
TIMESTAMP          VARCHAR     The Linux system time of query execution in a format that can be used as a Date/Time Expression (page 76).
NODE_NAME          VARCHAR     The name of the node that is reporting the requested information.
PROJECTION_NAME    VARCHAR     The name of the projection.
NUM_COLUMNS        INTEGER     The number of columns in the projection.
NUM_ROWS           INTEGER     The number of rows in the projection.
NUM_BYTES          INTEGER     The number of bytes of disk storage used by the projection.
WOS_ROWS           INTEGER     The number of WOS rows in the projection.
WOS_BYTES          INTEGER     The number of WOS bytes in the projection.
ROS_ROWS           INTEGER     The number of ROS rows in the projection.
ROS_BYTES          INTEGER     The number of ROS bytes in the projection.
NUM_ROS            INTEGER     The number of ROS containers in the projection.
TABLE_NAME         VARCHAR     The associated table name.

VT_QUERY_METRICS

Note: This table is deprecated. Use QUERY_METRICS (page 449) instead.

Monitors the sessions and queries executing on each node.

-484-
The associated table name. Monitors the amount of disk storage used by each projection on each node. The number of ROS rows in the projection. SESSION_ID START_TIME DURATION VARCHAR STRING INTEGER VT_PROJECTION_STORAGE Note: This table is deprecated. A unique ID that identifies the refresh session. The number of ROS bytes in the projection. Use QUERY_METRICS (page 449) instead.SQL Reference Manual FAILURE_COUNT INTEGER The number of times a refresh failed for the projection. The number of rows in the projection. The number of bytes of disk storage used by the projection. The number of WOS rows in the projection. The length of time that the projection refresh ran in seconds. The number of WOS bytes in the projection. Monitors the sessions and queries executing on each node. The number of ROS containers in the projection. Use PROJECTION_STORAGE (page 448) instead. The name of the projection. The number of columns in the projection. FAILURE_COUNT does not indicate whether or not the projection was eventually refreshed successfully. See STATUS to determine how the refresh operation is progressing. The name of the node that is reporting the requested information. VT_QUERY_METRICS Note: This table is deprecated. Column Name TIMESTAMP NODE_NAME PROJECTION_NAME NUM_COLUMNS NUM_ROWS NUM_BYTES WOS_ROWS WOS_BYTES ROS_ROWS ROS_BYTES NUM_ROS TABLE_NAME Data Type VARCHAR VARCHAR VARCHAR INTEGER INTEGER INTEGER INTEGER INTEGER INTEGER INTEGER INTEGER VARCHAR Description The Linux system time of query execution in a format that can be used as a Date/Time Expression (page 76). The time the projection refresh started (provided as a time stamp). -484- .

Use QUERY_PROFILES (page 450) instead.SQL System Tables (Monitoring APIs) Column Name TIMESTAMP Data Type VARCHA R VARCHA R INTEGER INTEGER INTEGER INTEGER INTEGER INTEGER INTEGER INTEGER Description The Linux system time of query execution in a format that can be used as a Date/Time Expression (page 76). The identification of the session for which profiling information is captured. The number of active user sessions (connections). This identifier is unique within the cluster at any point in time but can be reused when the session closes. Provides information regarding executed queries. To obtain information about executed queries. The total number of user sessions. otherwise NULL. NULL indicates that no statement is currently being processed. The query string used for the query. Column Name NODE_NAME SESSIONID Data Type VARCHA R VARCHA R INTEGER INTEGER VARCHA R VARCHA R VARCHA R Description The name of the node that is reporting the requested information. The projections used in the query. see Profiling Database Performance. The total number of active user and system sessions. The total number of user and system sessions. Containing the total number of queries executed. An identifier for the transaction within the session if any. The name of the node that is reporting the requested information. TXNID STMTID QUERY QRYSEARCHPATH PROJECTIONS -485- . The number of queries currently running. NODE_NAME ACTIVE_USER_SESSIONS ACTIVE_SYS_SESSIONS TOTAL_USER_SESSIONS TOTAL_SYS_SESSIONS TOTAL_ACTIVE_SESSIONS TOTAL_SESSIONS QUERIES_CURRENTLY_RUNNING TOTAL_QUERIES_EXECUTED VT_QUERY_PROFILING Note: This table is deprecated. An ID for the currently executing statement. The total number of system sessions. The number of active system sessions. A list of schemas in which to look for tables.

The total number of cancellations for this requester type. The total number of rejections for this requester type. The time of first rejection for this requester type. The requester type. The name of the node that is reporting the requested information. VT_RESOURCE_REJECTIONS Note: This table is deprecated. The total number of memory type rejections. Use RESOURCE_REJECTIONS (page 452) instead. Monitors requests for resources that are rejected by the resource manager. The total number of address space type rejections. The reason for the last rejection of this plan type. NODE_NAME ACCUMULATION_START REQUESTER REJECTED TIMEDOUT CANCELED LAST_RQT_RJTD LAST_REASON THREAD_RQTS FILEHANDLE_RQTS MEM_RQTS ADDR_SPACE_RQTS VARCHAR TIMESTAMP VARCHAR INTEGER INTEGER INTEGER VARCHAR VARCHAR INTEGER INTEGER INTEGER INTEGER Requester Types • • Plans (see Plan Types) WOS Plan Types • • Load Query Load Query Direct -486- . The Linux system time of query execution in a format that can be used as a Date/Time Expression. The total number of file handle type rejections. Column Name TIMESTAMP Data Type VARCHAR Description The Linux system time of query execution in a format that can be used as a Date/Time Expression (page 76). The total number of timeouts for this requester type. The total number of thread type rejections.SQL Reference Manual DURATION TIMESTAMP INTEGER VARCHA R The duration of the query in microseconds. The last resource type rejected for this plan type.

Monitors system resource management on each node. Column Name TIMESTAMP NODE_NAME Data Type VARCHAR VARCHAR Description The Linux system time of query execution in a format that can be used as a Date/Time Expression (page 76).SQL System Tables (Monitoring APIs) • • • • • • • • • • • • • • • • • • Insert Query Insert Query Direct Delete Query Select Query TM_MOVEOUT TM_MERGEOUT TM_ANALYZE. TM_DIRECTLOAD TM_REDELETE_MOVE TM_REDELETE_MERGE RECOVER ROS_SPLIT TM_DVWOS_MOVEOUT REDELETE_RECOVER REFRESH_HISTORICAL REFRESH_CURRENT ROS_SPLIT_REDELETE_1 ROS_SPLIT_REDELETE_2 Resource Types • • • • • • Number of Number of Number of Number of Number of Number of running plans running plans on initiator node (local) requested Threads requested File Handles requested KB of Memory requested KB of Address Space Reasons for Rejection • • • Usage of single request exceeds high limit Timed out waiting for resource reservation Canceled waiting for resource reservation VT_RESOURCE_USAGE Note: This table is deprecated. The name of the node that is reporting the requested -487- . Use RESOURCE_USAGE (page 454) instead.

For internal use only. The name of the schema. The number of resource request timeouts. The number of rejections due to a failed volume. The size of the WOS in bytes. VT_SCHEMA Provides information about the database schema. and memory (in kilobytes).SQL Reference Manual information. file handles. For internal use only. schemaid | schemaname | ownerid | ownername -488- . The memory requested in kilobytes. The current number of open file handles. REQUESTS LOCAL_REQUESTS REQ_QUE_DEPTH ACTIVE_THREADS OPEN_FILE_HANDLES KB_MEM_RQTD KB_ADDR_SPACE_RQTD WOS_BYTES WOS_ROWS ROS_BYTES ROS_ROWS BYTES ROWS RSC_RQT_REJECTED RSC_RQT_TIMEOUTS RSC_RQT_CANCELLED DISK_SPACE_RQT_RJTD FAILED_VOL_RJTD TOKENS_USED TOKENS_AVAILABLE INTEGER INTEGER INTEGER INTEGER INTEGER INTEGER INTEGER INTEGER INTEGER INTEGER INTEGER INTEGER INTEGER INTEGER INTEGER INTEGER INTEGER INTEGER INTEGER INTEGER The cumulative number of requests for threads. The number of resource request cancellations. Column Name SCHEMAID SCHEMANAME OWNERID OWNERNAME Data Type INTEGER VARCHAR INTEGER VARCHAR Description The schema ID from the catalog. The current request queue depth. The number of rows in the ROS. The current number of active threads. The name of the user who created the schema. The cumulative number of local requests. The number of rows in the WOS. The address space requested in kilobytes. The size of the ROS in bytes. The number of rejected disk write requests. The number of rejected plan requests. The owner ID from the catalog. Example SELECT * FROM vt_schema. The total number of rows in storage (WOS + ROS). The total size of storage (WOS + ROS) in bytes.

NULL if the session is internal The date and time the user logged into the database or when the internal session was created. The name of the node that is reporting the requested information. Monitors external sessions. NULL indicates that no statement is currently being processed. This identifier is unique within the cluster at any point in time but can be reused when the session closes. A description of the current transaction The date/time the current statement started execution.SQL System Tables (Monitoring APIs) -------------------+--------------+-------------------+----------45035996273704963 | public | 45035996273704961 | release 45035996273704977 | store | 45035996273704961 | release 45035996273704978 | online_sales | 45035996273704961 | release (3 rows) VT_SESSION Note: This table is deprecated. A string containing the hexadecimal representation of the transaction ID. Column Name TIMESTAMP NODE_NAME USERNAME CLIENT LOGIN_TIME Data Type VARCHAR VARCHAR VARCHAR VARCHAR DATE Description The Linux system time of query execution in a format that can be used as a Date/Time Expression (page 76). An ID for the currently executing statement. SESSIONID VARCHAR TXN_START TXNID TXN_DESCRIPTION STMT_START STMTID LAST_STMT_DURATI ON DATE VARCHAR VARCHAR DATE VARCHAR INTEGER -489- . The name used to log into the database or NULL if the session is internal The host name and port of the TCP socket from which the client connection was made. otherwise NULL. if any. Use SESSIONS (page 458) instead. You can use this table to: • • • • Identify users who are running long queries Identify users who are holding locks due to an idle but uncommitted transaction Disconnect users in order to shut down the database Determine the details behind the type of database security (Secure Socket Layer (SSL) or client authentication) used for a particular session. The duration of the last completed statement in milliseconds. 
This can be useful for identifying sessions that have been left open for a period of time and could be idle. The identifier required to close or interrupt a session. or NULL if no statement is running. The date/time the current transaction started or NULL if no transaction is running.

CURRENT_STMT             VARCHAR     The currently executing statement, if any; otherwise NULL.
LAST_STATEMENT           VARCHAR     NULL if the user has just logged in; otherwise the currently running statement or the most recently completed statement.
SSL_STATE                VARCHAR     Indicates if Vertica used Secure Socket Layer (SSL) for a particular session. Possible values are:
                                     None – Vertica did not use SSL.
                                     Server – Server authentication was used, so the client could authenticate the server.
                                     Mutual – Both the server and the client authenticated one another through mutual authentication.
                                     See Vertica Security and Implementing SSL.
AUTHENTICATION_METHOD    VARCHAR     The type of client authentication used for a particular session, if known. Possible values are:
                                     Unknown
                                     Trust
                                     Reject
                                     Kerberos
                                     Password
                                     MD5
                                     LDAP
                                     Kerberos-GSS
                                     See Vertica Security and Implementing Client Authentication.

Notes

•   The superuser has unrestricted access to all session information, but users can only view information about their own, current sessions.
•   During session initialization and termination, you might see sessions running only on nodes other than the node on which you executed the virtual table query. This is a temporary situation and corrects itself as soon as session initialization and termination completes.

VT_SESSION_PROFILING

Note: This table is deprecated. Use SESSION_PROFILES (page 457) instead.

Provides basic session parameters and lock time out data. To obtain information about sessions, see Profiling Database Performance.

Column Name                       Data Type   Description
TIMESTAMP                         VARCHAR     The Linux system time of query execution in a format that can be used as a Date/Time Expression.
NODE_NAME                         VARCHAR     The name of the node that is reporting the requested information.

-490-

USERNAME                          VARCHAR     The name used to log into the database or NULL if the session is internal.
CLIENT                            VARCHAR     The host name and port of the TCP socket from which the client connection was made. NULL if the session is internal.
LOGIN_TIME                        VARCHAR     The date and time the user logged into the database or when the internal session was created. This can be useful for identifying sessions that have been left open for a period of time and could be idle.
LOGOUT_TIME                       VARCHAR     The date and time the user logged out of the database or when the internal session was closed.
SESSIONID                         VARCHAR     The identification of the session for which profiling information is captured. This identifier is unique within the cluster at any point in time but can be reused when the session closes.
NUM_STMTSUCESSFULLYEXECUTED       INTEGER     The number of successfully executed statements.
NUM_STMTUNSUCCESSFULLYEXECUTED    INTEGER     The number of unsuccessfully executed statements.
NUM_LOCKGRANTS                    INTEGER     The number of locks granted during the session.
NUM_DEADLOCKS                     INTEGER     The number of deadlocks encountered during the session.
NUM_LOCKTIMEOUTS                  INTEGER     The number of times a lock timed out during the session.
NUM_LOCKCANCELED                  INTEGER     The number of times a lock was cancelled during the session.
NUM_LOCKREJECTIONS                INTEGER     The number of times a lock was rejected during a session.
NUM_LOCKERRORS                    INTEGER     The number of lock errors encountered during the session.

VT_SYSTEM

Note: This table is deprecated. Use SYSTEM (page 461) instead.

Monitors the overall state of the database.

Column Name   Data Type   Description
TIMESTAMP     VARCHAR     The Linux system time of query execution in a format that can be used as a Date/Time Expression (page 76).

-491-

CURRENT_EPOCH               INTEGER     The current epoch number.
AHM_EPOCH                   INTEGER     The AHM epoch number.
LAST_GOOD_EPOCH             INTEGER     The smallest (min) of all the checkpoint epochs on the cluster.
REFRESH_EPOCH               INTEGER     The oldest of the refresh epochs of all the nodes in the cluster.
DESIGNED_FAULT_TOLERANCE    INTEGER     The designed or intended K-safety level.
NUM_NODES                   INTEGER     The number of nodes in the cluster.
NUM_NODES_DOWN              INTEGER     The number of nodes in the cluster that are currently down.
CURRENT_FAULT_TOLERANCE     INTEGER     The number of node failures the cluster can tolerate before it shuts down automatically.
CATALOG_REV_NUM             INTEGER     The catalog version number.
WOS_BYTES                   INTEGER     The WOS size in bytes (cluster-wide).
WOS_ROWS                    INTEGER     The number of rows in WOS (cluster-wide).
ROS_BYTES                   INTEGER     The ROS size in bytes (cluster-wide).
ROS_ROWS                    INTEGER     The number of rows in ROS (cluster-wide).
BYTES                       INTEGER     The total storage in bytes (WOS + ROS) (cluster-wide).
ROWS                        INTEGER     The total number of rows (WOS + ROS) (cluster-wide).

VT_TABLE

Note: This table is deprecated. Use TABLES (page 419) instead.

Provides information about all tables in the database.

Column Name   Data Type   Description
SCHEMAID      INTEGER     The schema ID from the catalog.
SCHEMANAME    VARCHAR     The name of the schema.
TABLEID       INTEGER     The table ID from the catalog.
TABLENAME     VARCHAR     The name of the table.
OWNERID       INTEGER     The owner ID from the catalog.
OWNERNAME     VARCHAR     The name of the user who created the table.
SYSTABLE      BOOLEAN     Is 'f' for user-created tables, 't' for Vertica system tables.

Example

The following command returns information on all tables in the vmart schema:

-492-

SELECT * FROM VT_TABLE;

VT_TABLE_STORAGE

Note: This table is deprecated.

Monitors the amount of disk storage used by each table on each node.

Column Name        Data Type   Description
TIMESTAMP          VARCHAR     The Linux system time of query execution in a format that can be used as a Date/Time Expression (page 76).
NODE_NAME          VARCHAR     The name of the node that is reporting the requested information.
TABLE_NAME         VARCHAR     The table name.
NUM_PROJECTIONS    INTEGER     The number of projections using columns of the table.
NUM_COLUMNS        INTEGER     The number of columns in the table.
NUM_ROWS           INTEGER     The number of rows in the table (cardinality).
NUM_BYTES          INTEGER     The number of bytes used to store the projections.

VT_TUPLE_MOVER

Note: This table is deprecated. Use TUPLE_MOVER_OPERATIONS (page 462) instead.

Monitors the status of the Tuple Mover on each node.

Column Name   Data Type   Description
TIMESTAMP     VARCHAR     The Linux system time of query execution in a format that can be used as a Date/Time Expression (page 76).
NODE_NAME     VARCHAR     The name of the node that is reporting the requested information.
OPERATION     VARCHAR     One of the following operations:
                          Moveout
                          Mergeout

-493-

                          Analyze Statistics
STATUS         VARCHAR    Running, or an empty string to indicate 'not running.'
PROJ           VARCHAR    The name of the projection being processed.
START_EPOCH    INTEGER    The first epoch of the mergeout operation. (Not applicable for other operations.)
END_EPOCH      INTEGER    The last epoch of the mergeout operation (not applicable for other operations).
NUM_MINI_ROS   INTEGER    The number of ROS containers. (Not applicable for other operations.)
SIZE           INTEGER    The size in bytes of all ROS containers in the mergeout operation.
PLAN           VARCHAR    One of the following values:
                          Moveout
                          Mergeout
                          Analyze
                          Replay Delete

Notes

No output from VT_TUPLE_MOVER means that the Tuple Mover is not performing an operation. See TUPLE_MOVER_OPERATIONS (page 462) for an active example.

VT_VIEW

Note: This table is deprecated. Use VIEWS (page 422) instead.

Provides information about all views within the system. See Using Views for more information.

Column Name   Data Type   Description
SCHEMANAME    VARCHAR     The name of the schema that contains the view.
VIEWNAME      VARCHAR     The name of the view.
OWNERNAME     VARCHAR     The name of the user who created the view.
DEFINITION    VARCHAR     The query used to define the view.

VT_WOS_STORAGE

Note: This table is deprecated. Use WOS_CONTAINER_STORAGE (page 463) instead.

Monitors information about WOS storage, which is divided into regions. Each region allocates blocks of a specific size to store rows.

Column Name        Data Type   Description
TIMESTAMP          VARCHAR     The Linux system time of query execution in a format that can be used as a DATE/TIME expression.

-494-

NODE_NAME          VARCHAR     The node where the WOS data is stored.
TYPE               VARCHAR     Either system or user data.
ALLOCATOR_REGION   VARCHAR     The block size allocated by region in KB. The summary line sums the amount of memory used by all regions.
VIRTUAL_SIZE       INTEGER     The amount of virtual memory in use by region in KB. Virtual size is greater than or equal to allocated size, which is greater than or equal to in-use size.
ALLOCATED_SIZE     INTEGER     The amount of physical memory in use by a particular region in KB.
IN_USE_SIZE        INTEGER     The actual number of bytes of data stored by the region in KB.

Notes

•   The WOS allocator can use large amounts of virtual memory without assigning physical memory. To see the difference between virtual size and allocated size, look at the IN_USE_SIZE column to see if the WOS is full.
•   The summary line tells you the amount of memory used by the WOS, which is typically capped at one quarter of physical memory per node.

Examples

SELECT node_name, type, allocator_region, virtual_size, allocated_size, in_use_size
FROM MONITORING.VT_WOS_STORAGE;

 node_name | type   | allocator_region | virtual_size | allocated_size | in_use_size
-----------+--------+------------------+--------------+----------------+------------
 site01    | user   | Summary          |            0 |              0 |           0
 site01    | system | 16 KB Region     |       102400 |             16 |           0
 site01    | system | Summary          |       102400 |             16 |           0
 site02    | user   | Summary          |            0 |              0 |           0
 site02    | system | Summary          |            0 |              0 |           0
 site03    | user   | Summary          |            0 |              0 |           0
 site03    | system | Summary          |            0 |              0 |           0
 site04    | user   | Summary          |            0 |              0 |           0
 site04    | system | Summary          |            0 |              0 |           0
(9 rows)

-495-
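Building on the example above, the relationship among virtual, allocated, and in-use size can be checked directly on the per-node summary lines (a sketch against the deprecated table; WOS_CONTAINER_STORAGE is the supported replacement):

```sql
-- Compare allocated vs. in-use size on the summary lines to gauge how
-- full the WOS is (in_use_size <= allocated_size <= virtual_size).
SELECT node_name, type, allocated_size, in_use_size
FROM MONITORING.VT_WOS_STORAGE
WHERE allocator_region = 'Summary';
```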


347, 388, 92, 202, 113

Index

A

About the Documentation • 35
ABS • 177
ACOS • 177
ACTIVE_EVENTS • 418, 427
ADD_DESIGN_TABLES • 242, 254
ADD_LOCATION • 243, 313
ADD_MONTHS • 141
ADVANCE_EPOCH • 244, 478
AGE • 142
Aggregate Expressions • 72, 112, 388
Aggregate Functions • 72, 80, 112, 347, 388
ALTER PROJECTION • 244, 318
ALTER SCHEMA • 318, 349
ALTER TABLE • 250, 346, 349, 359, 377
ALTER USER • 325
ALTER_LOCATION_USE • 244
Analytic Functions • 125, 271
ANALYZE_CONSTRAINTS • 245, 389
ANALYZE_STATISTICS • 251, 427
AND operator • 65
ASCII • 199, 203
ASIN • 178
ATAN • 178
ATAN2 • 178
AVG • 112

B

BETWEEN-predicate • 79, 80, 81
Binary Data Types • 66, 89, 92, 93, 356
Binary Operators • 62, 92, 392
BIT_AND • 91, 92, 113
BIT_LENGTH • 200, 201, 217
BIT_OR • 91, 92, 114
BIT_XOR • 91, 92, 115
BITCOUNT • 92, 216, 217
BITSTRING_TO_BINARY • 90, 92, 217
Boolean Data Type • 65, 66, 93, 356
Boolean Operators • 65, 392
Boolean-predicate • 65, 80, 228
BTRIM • 202, 203

C

CANCEL_DEPLOYMENT • 252, 302
CANCEL_REFRESH • 252
CASE Expressions • 72, 73, 193, 194, 195
CAST • 51, 54, 56, 66, 89, 91, 92
CBRT • 179
CEILING (CEIL) • 179
CHAR • 94
Character Data Types • 66, 94, 356
CHARACTER_LENGTH • 200, 201, 204, 217
CHR • 199, 204
CLEAR_DESIGN_SEGMENTATION_TABLE • 253, 311
CLEAR_DESIGN_TABLES • 243, 254
CLEAR_QUERY_REPOSITORY • 254
CLIENT_ENCODING • 204
CLOCK_TIMESTAMP • 143, 166
CLOSE_ALL_SESSIONS • 254
CLOSE_SESSION • 256
COALESCE • 193, 194, 197
Column length limits
  fixed length • 49
  variable length • 49
Column References • 74, 198
column-constraint • 320, 335, 340, 353, 354
column-definition • 320, 353, 354
COLUMN_STORAGE • 269, 418, 428
COLUMNS • 418, 419, 429
column-value-predicate • 81, 83, 230
Comments • 75
COMMIT • 46, 296
Comparison Operators • 65, 80, 228
Compound key • 323
Concurrent connections
  per cluster • 49
  per node • 49
CONFIGURE_DEPLOYMENT • 258
Constants • 55, 71
COPY • 46, 234, 239, 259, 265, 271, 281, 283, 287, 291, 296, 297, 299, 300, 324, 326, 327, 346, 348, 351, 353, 356, 357, 365

-497-
380 COS • 180 COT • 180 COUNT • 116 COUNT(*) • 119 CREATE PROJECTION • 341. 112. 120. 129. 182. 245. 194. 125. 267. 278. 78. 89. 220. 217 BIT_OR • 91. 56. 153. 335 Comparison Operators • 65. 356 CHARACTER_LENGTH • 200. 326. 356 Boolean Operators • 65. 359 ALTER USER • 325 ALTER_LOCATION_USE • 244 Analytic Functions • 125. 217 CHR • 199. 271.

144 CURRENT_TIMESTAMP • 78. 440. 144 CURRENT_SCHEMA • 237 CURRENT_SESSION • 418. 154. 167 F FALSE • 65 FIRST_VALUE / LAST_VALUE • 125. 445. 271. 467. 277 EXPORT_DESIGN_TABLES • 276. 483 DISPLAY_LICENSE • 267 DO_TM_TASK • 268. 301 CREATE_DESIGN_CONFIGURATION • 260 CREATE_DESIGN_CONTEXT • 260 CREATE_DESIGN_QUERIES_TABLE • 261. 165 CURRENT_USER • 238. 364. 153. 264. 153. 275. 274. 177. 410. 482. 302. 365 DROP TABLE • 352. 365. 145. 488 DUMP_PARTITION_KEYS • 269. 440. 293. 359. 296 -498- DISK_RESOURCE_REJECTIONS • 418. 462. 66. 344 EVENT_CONFIGURATIONS • 418. 479 CURRENT_TIME • 78. 353. 148. 356 Date/Time Expressions • 58. 435. 269. 503. 105. 294. 452. 349. 293. 411 DROP USER • 369 DROP VIEW • 361. 359. 447. 270. 348 CREATE TABLE • 250. 96. 274. 483. 294. 441. 470. 293. 403 Day of the Week Names • 59 DECODE • 204 DEGREES • 181 DELETE • 45. 437. 128. 96. 477 DROP_LOCATION • 269. 164.SQL Reference Manual CREATE SCHEMA • 319. 318 DOUBLE PRECISION • 56 DOUBLE PRECISION (FLOAT) • 69. 159. 458. 275. 463. 272. 293. 294. 440. 497. 250. 445. 274. 87 EXTRACT • 145. 309 CREATE_PROJECTION_DESIGN • 262 CURRENT_DATABASE • 237 CURRENT_DATE • 78. 126. 89. 140 Date/Time Operators • 68 DATE_PART • 145 DATE_TRUNC • 147 DATEDIFF • 148 DATESTYLE • 96. 274. 471. 403 Date/Time Data Types • 66. 444 EXECUTION_ENGINE_PROFILES • 418. 437. 453. 356 CREATE USER • 360 CREATE VIEW • 361 CREATE_DESIGN • 259. 363. 91. 457. 186 DROP PROJECTION • 288. 294. 349. 259. 352 DUMP_PROJECTION_PARTITION_KEYS • 269. 377. 367. 153. 275. 352 DUMP_CATALOG • 273 DUMP_LOCKTABLE • 274. 411. 499. 300. 240 D Data Type Coercion Operators (CAST) • 56. 352 DUMP_TABLE_PARTITION_KEYS • 269. 365. 379. 76. 404 CREATE TEMPORARY TABLE • 352. 502. 356. 265. 272. 279. 316 Deprecated System Tables • 477 Depth of nesting subqueries • 49 DISABLE_DUPLICATE_KEY_ERROR • 247. 155. 420 Formatting Functions • 167 . 275. 165 FLOOR • 181 FOREIGN_KEYS • 418. 277 EXPORT_STATISTICS • 278. 362. 92 Database size • 49 DATE • 59. 142. 265. 472. 
415 DEPLOY_DESIGN • 252. 239. 433. 306. 369. 144. 368. 466. 441. 108. 351. 483 EXP • 181 EXPLAIN • 371 EXPORT_CATALOG • 275 EXPORT_DESIGN_CONFIGURATION • 276. 298. 294. 495. 449. 482 DISK_STORAGE • 418. 271. 504 Date/Time Functions • 78. 353 E encoding-type • 341. 287. 496. 274. 275. 359. 293. 299 DROP_PARTITION • 269. 352. 494. 358. 161. 368 DROP SCHEMA • 319. 401. 271. 296 Expressions • 71. 149 Date/Time Constants • 58.

161 GETUTCDATE • 154 GRANT (Schema) • 376. 456. 449. 341. 343. 309 LOAD_STREAMS • 334. 397 HEX_TO_BINARY • 90. 392 G GET_AHM_EPOCH • 278 GET_AHM_TIME • 278 GET_CURRENT_EPOCH • 279 GET_DESIGN_SCRIPT • 263. 392 LIMIT Clause • 388. 391. 401 Limits Basic names • 49 Columns per table • 49 Concurrent connections per cluster • 49 Connections per node. 156 LOCALTIMESTAMP • 78. 389. 217 Length for a fixed-length column • 49 LIKE-predicate • 84. 390. 388. 203. 346. 91. 119. 296 LOAD_DESIGN_QUERIES • 261. 218 Keywords and Reserved Words • 51 L LAST_DAY • 155 LCOPY • 327. 415 J joined-table • 390. 213. 376. 446. 101. 108 INTERRUPT_STATEMENT • 253. 82. 347 hash-segmentation-clause • 341. 264. 406 ISFINITE • 155 ISNULL • 194. 343. 450. 394 K Key size • 49 Keywords • 51. 487. 389. 342. 149 Interval Values • 61. 400. 383 GRANT (View) • 362. 235 INET_NTOA • 92. 359 INET_ATON • 92. 207 HOST_RESOURCES • 418. 489 I Identifiers • 55. 391 join-predicate • 83. 86. 467. 486 LOCAL_NODES • 419. 388. 419. 213 GROUP BY Clause • 78. 83. 493 LOG • 183 LOWER • 214 -499- H HAS_TABLE_PRIVILEGE • 238 HASH • 182. 200. GET_TABLE_PROJECTIONS • 282. 378 GRANTS • 418. 287. 103. 92. 279 GET_LAST_GOOD_EPOCH • 280 GET_NUM_ACCEPTED_ROWS • 280 GET_NUM_REJECTED_ROWS • 280 GET_PROJECTION_STATUS • 281. 283 INTERVAL • 96. 283. 484 GREATEST • 205. 380 LEAD / LAG • 126. 207. 388. 392 INSERT • 379 INSTR • 210 INTEGER • 56. 185. 450 LOCALTIME • 78. 94. 107. 365 GETDATE • 154. number • 49 Database size • 49 Depth of nesting subqueries • 49 Fixed-length column • 49 Key size • 49 Projections per database • 49 Row size • 49 Rows per load • 49 Table size • 49 Tables per database • 49 Variable-length column • 49 LN • 183 LOAD_DATA_STATISTICS • 286. 102. 209. 112.Index FROM Clause • 81. 211 LEFT • 213 Length Basic names • 49 Fixed-length column • 49 Length for a variable-length column • 49 LENGTH • 92. 421. 365 GET_PROJECTIONS. 377. 208 INITCAP • 209 IN-predicate • 82. 218 IMPLEMENT_TEMP_DESIGN • 283. 197 ISO 8601 • 58 . 
131 LEAST • 207. 157 LOCKS • 419. 346 HAVING Clause • 112. 382 GRANT (Table) • 376.

LPAD • 215
LTRIM • 202

M
MAKE_AHM_NOW • 288
MARK_DESIGN_KSAFE • 288, 289
Mathematical Functions • 110, 177
Mathematical Operators • 68
MAX • 91, 120
MD5 • 216
MEASURE_LOCATION_PERFORMANCE • 290
MERGE_PARTITIONS • 272
MIN • 91, 120
MOD • 184
MODULARHASH • 182
Month Names • 60
MONTHS_BETWEEN • 157
Multi-column key • 323

N
NaN • 56
NODE_RESOURCES • 348, 419
NOW • 77, 159
NULL • 65, 78
NULL-handling Functions • 193
NULLIF • 195
NULL Operators • 70
NULL-predicate • 80, 87
NULL Value • 78
Number of columns per table • 49
Number of connections per node • 49
Number of rows per load • 49
NUMERIC • 107
Numeric Constants • 56, 61
Numeric Data Types • 66
Numeric Expressions • 56, 356

O
OCTET_LENGTH • 200, 216
OFFSET Clause • 388, 392
Operators • 62
OR operator • 65
ORDER BY Clause • 112
OVERLAPS • 159
OVERLAY • 217

P
Pacific Standard Time • 58
PARTITION_PROJECTION • 269, 353
PARTITION_TABLE • 272, 353
PARTITIONS • 419
PI • 185
POSITION • 218, 225
POWER • 185
Predicates • 79
Preface • 43
PRIMARY_KEYS • 418, 423
Printing Full Books • 36
PROJECTION_REFRESHES • 313, 492
PROJECTION_STORAGE • 269, 494
PROJECTIONS • 313, 419
Projections per database • 49
PST • 58
PURGE • 294, 303
PURGE_PROJECTION • 294
PURGE_TABLE • 294, 303

Q
QUERY_METRICS • 419, 494
QUERY_PROFILES • 419, 495
QUOTE_IDENT • 218
QUOTE_LITERAL • 219
Quoted identifiers • 55

R
RADIANS • 186
RANDOM • 186
RANDOMINT • 187
range-segmentation-clause • 341, 347
RANK / DENSE_RANK • 126, 135
READ_DATA_STATISTICS • 278, 295
Reading the Online Documentation • 35
REENABLE_DUPLICATE_KEY_ERROR • 247, 296
RELEASE SAVEPOINT • 47, 387
REMOVE_DEPLOYMENT_ENTRY • 297
REMOVE_DESIGN • 254, 297
REMOVE_DESIGN_CONTEXT • 254, 297

REPEAT • 92, 220
REPLACE • 221
Reserved Words • 54
RESET_DESIGN_QUERIES_TABLE • 261, 297
RESOURCE_REJECTIONS • 419, 496
RESOURCE_USAGE • 419, 497
RESTORE_LOCATION • 298
RETIRE_LOCATION • 243, 299
REVERT_DEPLOYMENT • 252, 299
REVOKE (Schema) • 382
REVOKE (Table) • 383
REVOKE (View) • 362, 383
RIGHT • 222
ROLLBACK • 46, 385
ROLLBACK TO SAVEPOINT • 47, 386
ROUND • 187
ROW_NUMBER • 126, 138
Row size • 49
RPAD • 222
RTRIM • 202, 230
RUN_DEPLOYMENT • 252, 300

S
SAVE_DESIGN_VERSION • 302
SAVEPOINT • 47, 386
SAVE_QUERY_REPOSITORY • 302
Search Conditions • 86
SEARCH_PATH • 349, 404
SELECT • 112, 388
SELECT CURRENT_SCHEMA • 302
SESSION CHARACTERISTICS • 46, 405
SESSION_PROFILES • 419, 500
SESSIONS • 254, 500
SESSION_USER • 238, 239
SET • 401
SET_AHM_EPOCH • 288, 304
SET_AHM_TIME • 288, 305
SET_DESIGN_KSAFETY • 306
SET_DESIGN_LOG_FILE • 307
SET_DESIGN_LOG_LEVEL • 307
SET_DESIGN_PARAMETER • 308
SET_DESIGN_QUERIES_TABLE • 261, 308
SET_DESIGN_QUERY_CLUSTER_LEVEL • 309
SET_DESIGN_SEGMENTATION_COLUMN • 253, 309
SET_DESIGN_SEGMENTATION_TABLE • 242, 310
SET_DESIGN_TABLE_ROWS • 242, 311
SET_LOCATION_PERFORMANCE • 312
SHOW • 403, 409
SHOW SEARCH_PATH • 409
SIGN • 189
SIN • 189
SPLIT_PART • 223
SQL Data Types • 89, 388
SQL Functions • 111
SQL Language Elements • 51
SQL Overview • 45
SQL Statements • 317
SQL System Tables (Monitoring APIs) • 290, 417
SQRT • 189
START_REFRESH • 253, 313
STATEMENT_TIMESTAMP • 143, 166
Statistical analysis • 121, 125
STDDEV • 121
STDDEV_POP • 121
STDDEV_SAMP • 121, 122
STORAGE_CONTAINERS • 419, 431
String Concatenation Operators • 70, 239
String Constants (Dollar-Quoted) • 57
String Constants (Standard) • 57
String Functions • 199
STRPOS • 218, 225
SUBSTR • 213, 227
SUBSTRING • 92, 226
Suggested Reading Paths • 35, 36
SUM • 107, 122
SUM_FLOAT • 107, 122
SYNC_CURRENT_DESIGN • 314
SYSDATE • 154, 161
SYSTEM • 269, 316
System Information Functions • 237
System Limits • 49
SYSTEM_TABLES • 418, 501

T
table-constraint • 320, 323
TABLE_CONSTRAINTS • 418, 426
Table size • 49

table-primary • 390, 391
table-reference • 390
TABLES • 418, 502
Tables per database • 49
TAN • 190
Technical Support • 33, 35
TEMP_DESIGN_SCRIPT • 314
Template Pattern Modifiers for Date/Time Formatting • 170, 175
Template Patterns for Date/Time Formatting • 140, 167, 330
Template Patterns for Numeric Formatting • 167, 172, 330
TIME • 59, 101
TIME AT TIME ZONE • 97, 149
TIMEOFDAY • 165
TIME_SLICE • 131, 161
TIMESTAMP • 99, 149
TIME ZONE • 401, 406
Time Zone Names for Setting TIME ZONE • 406, 407
Time Zone Values • 58
TO_BITSTRING • 92, 227
TO_CHAR • 167
TO_DATE • 169
TO_HEX • 91, 228
TO_NUMBER • 171
TO_TIMESTAMP • 170
TRANSACTION_TIMESTAMP • 143, 165
TRANSLATE • 229
TRIM • 202, 229
TRUE • 65
TRUNC • 147, 190
TRUNCATE TABLE • 358, 410
TUPLE_MOVER_OPERATIONS • 419, 502
TYPES • 418, 428
Typographical Conventions • 40

U
UNION • 388, 411
Unquoted identifiers • 55
UPDATE • 45, 415
UPDATE_DESIGN • 259, 315
UPPER • 230
USER • 238, 240
USERS • 418, 429
UTC • 58

V
V6_ATON • 92, 232
V6_NTOA • 92, 233
V6_SUBNETA • 92, 233
V6_SUBNETN • 92, 234
V6_TYPE • 92, 234
VARBINARY • 89
VARIANCE • 125
VAR_POP • 124
VAR_SAMP • 124, 125
V_CATALOG Schema • 417, 419
VERSION • 240
Vertica Functions • 45, 242
VIEW_COLUMNS • 418, 430
VIEWS • 418, 432
V_MONITOR Schema • 417, 419
VT_ACTIVE_EVENTS • 478
VT_COLUMN_STORAGE • 272, 479
VT_CURRENT_SESSION • 479
VT_DISK_RESOURCE_REJECTIONS • 482
VT_DISK_STORAGE • 483
VT_EE_PROFILING • 482, 483
VT_GRANT • 484
VT_LOAD_STREAMS • 280, 486
VT_LOCK • 487
VT_NODE_INFO • 489
VT_PARTITIONS • 490
VT_PROJECTION • 272, 491
VT_PROJECTION_REFRESH • 253, 492
VT_PROJECTION_STORAGE • 494
VT_QUERY_METRICS • 494
VT_QUERY_PROFILING • 482, 495
VT_RESOURCE_REJECTIONS • 496
VT_RESOURCE_USAGE • 497
VT_SCHEMA • 498
VT_SESSION • 253, 499
VT_SESSION_PROFILING • 482, 500
VT_SYSTEM • 501
VT_TABLE • 502
VT_TABLE_STORAGE • 503
VT_TUPLE_MOVER • 503
VT_VIEW • 504
VT_WOS_STORAGE • 505

W
WAIT_DEPLOYMENT • 316

WHERE Clause • 363, 415
Where to Find Additional Information • 39
Where to Find the Vertica Documentation • 35
WIDTH_BUCKET • 191
WOS_CONTAINER_STORAGE • 419, 505

Z
Zulu • 58