sql

SQL injection vulnerability in the MegaBIP software

Another SQL injection vulnerability has been discovered in the MegaBIP software and assigned the identifier CVE-2024-6160.




sql

SQL injection vulnerability in the MegaBIP software

Another SQL injection vulnerability has been discovered in the MegaBIP software and assigned the identifier CVE-2024-6527.




sql

SQL*Plus error logging – a new feature in release 11.1

Apart from writing code, one of the most important things a developer does is debugging: fixing the errors the code raises. But to debug effectively, the errors first have to be captured somewhere. Until now, every application has had to maintain its own user-defined error logging table(s).

Imagine if the tool itself were rich enough to capture the errors automatically. That is now possible with the new SQL*Plus release 11.1.

Developers often complain that they do not have the privilege to create tables and therefore cannot log errors in a user-defined error logging table. In such cases this is a really helpful feature, at least during unit testing of the code.

Here is a small demonstration in the SCOTT schema using the default error log table SPERRORLOG; I hope this step-by-step demo makes the feature easy to follow.

NOTE: SQL*Plus error logging is OFF by default, so you need to "set errorlogging on" before errors are written to the SPERRORLOG table.

SP2 Error

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> desc sperrorlog;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------

 USERNAME                                           VARCHAR2(256)
 TIMESTAMP                                          TIMESTAMP(6)
 SCRIPT                                             VARCHAR2(1024)
 IDENTIFIER                                         VARCHAR2(256)
 MESSAGE                                            CLOB
 STATEMENT                                          CLOB

SQL> truncate table sperrorlog;

Table truncated.

SQL> set errorlogging on;
SQL> selct * from dual;
SP2-0734: unknown command beginning "selct * fr..." - rest of line ignored.
SQL> select timestamp, username, script, statement, message from sperrorlog;

TIMESTAMP
---------------------------------------------------------------------------
USERNAME
--------------------------------------------------------------------------------

SCRIPT
--------------------------------------------------------------------------------

STATEMENT
--------------------------------------------------------------------------------

MESSAGE
--------------------------------------------------------------------------------

11-SEP-13 01.27.29.000000 AM
SCOTT


TIMESTAMP
---------------------------------------------------------------------------
USERNAME
--------------------------------------------------------------------------------

SCRIPT
--------------------------------------------------------------------------------

STATEMENT
--------------------------------------------------------------------------------

MESSAGE
--------------------------------------------------------------------------------

selct * from dual;
SP2-0734: unknown command beginning "selct * fr..." - rest of line ignored.

ORA Error

SQL> truncate table sperrorlog;

Table truncated.

SQL> select * from dula;
select * from dula
              *
ERROR at line 1:
ORA-00942: table or view does not exist

SQL> select timestamp, username, script, statement, message from sperrorlog;

TIMESTAMP
---------------------------------------------------------------------------
USERNAME
--------------------------------------------------------------------------------

SCRIPT
--------------------------------------------------------------------------------

STATEMENT
--------------------------------------------------------------------------------

MESSAGE
--------------------------------------------------------------------------------

11-SEP-13 01.36.08.000000 AM
SCOTT


TIMESTAMP
---------------------------------------------------------------------------
USERNAME
--------------------------------------------------------------------------------

SCRIPT
--------------------------------------------------------------------------------

STATEMENT
--------------------------------------------------------------------------------

MESSAGE
--------------------------------------------------------------------------------

select * from dula
ORA-00942: table or view does not exist

As shown above for the SP2 and ORA errors, you can capture PLS (PL/SQL) errors too.
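
For example, a PL/SQL block that references an undeclared identifier raises a PLS error. The sketch below is not taken from the original demo: the name undefined_proc is made up and the output is approximate.

SQL> begin
  2  undefined_proc;
  3  end;
  4  /
undefined_proc;
*
ERROR at line 2:
ORA-06550: line 2, column 1:
PLS-00201: identifier 'UNDEFINED_PROC' must be declared
ORA-06550: line 2, column 1:
PL/SQL: Statement ignored

Querying SPERRORLOG afterwards shows the block in the STATEMENT column and the ORA-06550/PLS-00201 text in the MESSAGE column, just like the SP2 and ORA examples.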

If you want to execute the statements through a script, you can do it like this and later spool the errors into a file. I kept these three lines in the sperrorlog_test.sql file -

truncate table sperrorlog;
selct * from dual;
select * from dula;

SQL> @D:sperrorlog_test.sql;

Table truncated.

SP2-0734: unknown command beginning "selct * fr..." - rest of line ignored.
select * from dula
              *
ERROR at line 1:
ORA-00942: table or view does not exist


SQL> select TIMESTAMP, SCRIPT, STATEMENT, MESSAGE from sperrorlog;

TIMESTAMP
---------------------------------------------------------------------------
SCRIPT
--------------------------------------------------------------------------------

STATEMENT
--------------------------------------------------------------------------------

MESSAGE
--------------------------------------------------------------------------------

11-SEP-13 01.50.17.000000 AM

D:sperrorlog_test.sql;
SP2-0734: unknown command beginning "D:sperror..." - rest of line ignored.


TIMESTAMP
---------------------------------------------------------------------------
SCRIPT
--------------------------------------------------------------------------------

STATEMENT
--------------------------------------------------------------------------------

MESSAGE
--------------------------------------------------------------------------------

11-SEP-13 01.50.27.000000 AM
D:sperrorlog_test.sql
selct * from dual;
SP2-0734: unknown command beginning "selct * fr..." - rest of line ignored.


TIMESTAMP
---------------------------------------------------------------------------
SCRIPT
--------------------------------------------------------------------------------

STATEMENT
--------------------------------------------------------------------------------

MESSAGE
--------------------------------------------------------------------------------

11-SEP-13 01.50.27.000000 AM
D:sperrorlog_test.sql
select * from dula
ORA-00942: table or view does not exist

SQL>

Check the Oracle documentation on SPERRORLOG for more details.

In addition to the above, if you want each session's errors to be identifiable (and spooled into their own file), you could do this -

SQL> set errorlogging on identifier my_session_identifier

The IDENTIFIER keyword shown above corresponds to a column in the SPERRORLOG table, which gets populated with the string value "my_session_identifier". Now you just need to do this -
SQL> select timestamp, username, script, statement, message
2 from sperrorlog
3 where identifier = 'my_session_identifier';

To spool the session-specific errors into a file, just do this -

SQL> spool error.log
SQL> select timestamp, username, script, statement, message
2 from sperrorlog
3 where identifier = 'my_session_identifier';
SQL> spool off
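
If you do have the CREATE TABLE privilege, you can also point error logging at your own table instead of the default SPERRORLOG. A minimal sketch (the table name my_errorlog is just an example; the table must have the same columns as SPERRORLOG):

-- create an empty copy of the default error log table
create table my_errorlog as select * from sperrorlog where 1 = 0;

-- write subsequent errors to that table instead of SPERRORLOG
set errorlogging on table my_errorlog

-- any failing statement is now recorded there
select * from dula;
select statement, message from my_errorlog;

Adding the TRUNCATE option (for example, set errorlogging on truncate) clears the existing contents of the error log table.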




sql

MySQL optimization

I have noticed that a lot of queries against SproutSearch's main database table are getting slow as SproutSearch passes 8 million indexed blogs. I finally decided to do something about it after trying to alter this table. I attempted to add a column that keeps track of the date and time of each blog's most recent post. The ALTER TABLE command ran for at least 8 hours, and then MySQL either crashed or the admins killed my process. I attempted this a second time without creating a new index, which also failed.

I figured I would just create a new table with the extra column and write a program to slowly copy everything over. The first version of this PHP program queried 10,000 rows of data from the old table and inserted them one by one into the new table. I set up a cron job to run this every 10 minutes. Once the new table started getting big, the cron jobs were overlapping, some records were not copied, and copy processes started backing up. It dawned on me that I'd better learn something about MySQL optimization.

I read some online articles and decided to try using mysqli_multi_query to copy the records, which would reduce the network overhead. The program ran several times faster, but I wanted to look into other methods. I tried using prepared statements, which wasn't much better. Then I found this excellent article (http://www.informit.com/articles/article.asp?p=377652&seqNum=4&rl=1), which said that if I use the insert format insert into table (column1, column2) values (val1, val2), (val1, val2)..., MySQL wouldn't have to flush the index after every insert. I made my program build one giant insert statement in this format. I tried it out, and it only took a few seconds when the new table was empty. I modified the program to run 10 batches of 10,000 records, which takes a few minutes. This program has been running for a few days, and all my data is finally in the new table.

I am still having problems with the table being locked during lengthy select statements, which causes certain pages to hang for a long time. I am now copying all the data from the MyISAM table to an InnoDB table because InnoDB has row-level locking.
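
For reference, the multi-row INSERT format referred to above looks roughly like this; the table and column names are invented for illustration and are not SproutSearch's actual schema:

-- one statement inserts a whole batch of rows
INSERT INTO blogs_new (blog_id, url, last_post_at)
VALUES
  (101, 'http://example.com/blog-a', '2007-03-01 08:30:00'),
  (102, 'http://example.com/blog-b', '2007-03-01 09:10:00'),
  (103, 'http://example.com/blog-c', '2007-03-02 17:45:00');

Because the whole batch is a single statement, MySQL parses it once and avoids the per-row index flushing described above, which is where most of the speedup comes from. The final MyISAM-to-InnoDB move could likewise be done with batched inserts into a new InnoDB table, or, on a table that can tolerate being locked for the duration, with a statement like ALTER TABLE blogs_new ENGINE=InnoDB;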




sql

AGRESSO SQL Functions Reference 2.0

AGRESSO SQL Functions Reference 2.0 released. This freeware reference provides a list of the functions available in AGRESSO SQL, their meanings and syntax. Version 2.0 contains many new entries; built-in hyperlinks allow easy navigation between entries.




sql

New Release SQL2RSS

This new SQL2RSS script allows users to easily convert information that is stored in a database into an RSS feed, for syndication and distribution.

When using SQL2RSS the publisher has complete control over the content in the resulting RSS feed. Administrators and publishers control the database query which allows them the flexibility to determine what data is inserted into the RSS feeds from the database.




sql

New RSS2SQL Script

NotePage, Inc. is pleased to announce RSS2SQL, a new PHP script that allows users to convert RSS feeds to databases.

This new RSS2SQL script allows users to easily store information that is contained in an RSS feed into a MySQL database. When using the RSS2SQL script, the publisher can control which fields of data from the RSS feed are stored in the database, which allows the flexibility to determine what data is inserted into the database from each feed.

The RSS2SQL script joins six existing scripts in FeedForAll's RSS Scripts directory. Access to the scripts directory is freely available to all registered users of FeedForAll and FeedForAll Mac, or a subscription to the scripts directory can be purchased for $29.95.




sql

SQL2RSS

The sql2rss.php script allows you to easily create RSS feeds from SQL databases. The script currently supports the conversion of MySQL databases to RSS feeds.


SQL2RSS




sql

RSS2SQL

The RSS2SQL script is used to create MySQL databases from RSS feeds.




sql

SQL2RSS Converts MySQL to RSS Feeds

Convert MySQL databases to RSS feeds using SQL2RSS.

SQL2RSS




sql

Using Pandas and SQL Together for Data Analysis

In this tutorial, we’ll explore when and how SQL functionality can be integrated within the Pandas framework, as well as its limitations.




sql

Netskills course on Database Design and SQL.

Details of the Netskills course on 'Database Design and SQL', to be held on Tuesday 13th June 2006 at the University of Bath, are now available. This course is an ideal warm-up for the Institutional Web Management Workshop. [2006-04-27]




sql

Episode 137: SQL with Jim Melton

In this episode, Arno talks to Jim Melton about the SQL programming language. In addition to covering the concepts and ideas behind SQL, Jim shares stories and insights based on his many years' experience as SQL specification lead.




sql

Episode 165: NoSQL and MongoDB with Dwight Merriman

Dwight Merriman talks with Robert about the emerging NoSQL movement, the three types of non-relational data stores, Brewer's CAP theorem, the weaker consistency guarantees that can be made in a distributed database, document-oriented data stores, the data storage needs of modern web applications, and the open source MongoDB.




sql

SE-Radio Episode 362: Simon Riggs on Advanced Features of PostgreSQL

Simon Riggs, founder and CTO of 2nd Quadrant, discusses the advanced features of the Postgres database that allow developers to focus on applications whilst the database does the heavy lifting of handling large and diverse quantities of data.




sql

Episode 433: Jay Kreps on ksqlDB

Jay Kreps, CEO and co-founder of Confluent, discusses ksqlDB, a database built specifically for stream processing applications, which lets you query streaming events in Kafka with a SQL-like interface.
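
As a rough illustration of that SQL-like interface, a ksqlDB stream over a Kafka topic and a continuous query against it look something like the sketch below (the topic and column names are invented, and the topic is assumed to already exist):

-- declare a stream backed by an existing Kafka topic
CREATE STREAM pageviews (user_id VARCHAR, page VARCHAR)
  WITH (KAFKA_TOPIC = 'pageviews', VALUE_FORMAT = 'JSON');

-- a push query that keeps emitting updated counts as new events arrive
SELECT page, COUNT(*) AS views
FROM pageviews
GROUP BY page
EMIT CHANGES;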




sql

Episode 510: Deepthi Sigireddi on How Vitess Scales MySQL

In this episode, Deepthi Sigireddi of the Cloud Native Computing Foundation (CNCF) spoke with SE Radio host Nikhil Krishna about how Vitess scales MySQL. They discuss the design and architecture of the product; how Vitess impacts modern data problems;...




sql

SE Radio 560: Sugu Sougoumarane on Distributed SQL with Vitess

Sugu Sougoumarane discusses how to face the challenges of horizontally scaling MySQL databases through the Vitess distribution engine and Planetscale, a service built on top of Vitess. The journey began with the growing pains of scale at YouTube around the time of Google’s acquisition of the video service. This episode explores ideas about topology management, sharding, Paxos, connection pooling, and how Vitess handles large transactions while abstracting complexity from the application layer.




sql

MySQL Performance: Linux I/O

some useful tests and data that help to validate a lot of what we already do at craigslist




sql

Heating up with MySQL

A conversation with both MySQL community managers on the latest in the community and the technology

In this conversation I talk with both MySQL community managers, David Stokes from Texas and Frédéric Descamps from Belgium.

I met both of them in Brussels in February 2020 during the preFOSDEM 2020 MySQL Days. This is where the MySQL community comes together for two days of intensive technical sessions before going to the premier European open source conference — FOSDEM (Free and Open Source Software Developers' European Meeting). Yes, there's that much content to deal with so you need a conference before the conference!

Before the pandemic, Oracle used to send many engineers from the company's various open source projects to FOSDEM to participate in the event and engage with the other 20K developers from around the world. And the MySQL team at Oracle was always a big part of the conference.

Now in mid 2021 tech conferences still haven't really started up again, so development and community building takes place virtually. That's where I pick up the conversation with David and Frédéric.

MySQL technology is hot these days. A new release came out recently with many fixes and enhancements, but even before that the project released a new feature called HeatWave. That's a real-time analytics MySQL Database Service in Oracle Cloud Infrastructure (OCI). It's easily enabled and disabled on demand with no application changes and can result in performance improvements of about 400 times. And it's only found in OCI. David and Frédéric said the early customer and developer reaction to the technology has been "staggering and eye popping." In the conversation the guys also talked about more bits in MySQL, such as high availability, clustering, the shell, and the Kubernetes operator.

We also explored some of the dynamics going on in the MySQL community, such as the steep learning curves that all developers have to deal with these days due to the ever-increasing rate of change in software development. Some of the younger developers are discovering that tools and techniques that were considered "old" are coming back because, well, they just work. On the other hand, younger developers seem to tolerate all the exclusively virtual events better than some of the older developers do. So there are multiple simultaneous trends taking place, just as there are in any aspect of life. Frédéric and David have a good sense of the community and hope that when live events return they can bring the older developers together with all the new developers so the community can gel again in physical experiences.

And finally, they both said that in another six months there will be some really cool technology coming out. They wouldn't say more than that, but I'd take their word on the tease. The last time they told me that some cool stuff was coming, they released HeatWave shortly thereafter.

Video on YouTube
https://youtu.be/T3TK23THWKw 

David Stokes is a MySQL Community Manager in Texas.
https://twitter.com/stoker

Frédéric Descamps is a MySQL Community Manager in Belgium.
https://twitter.com/lefred

Jim Grisanzio is a Sr. Community Manager in Oracle Developer Relations
https://twitter.com/jimgris

MySQL Community
https://dev.mysql.com/

Announcing July 2021 Releases featuring MySQL 8.0.26
https://blogs.oracle.com/mysql/announcing-july-2021-releases-featuring-mysql-8026

HeatWave
https://www.oracle.com/mysql/heatwave/

MySQL Database Service—New HeatWave Innovations
https://www.oracle.com/events/live/mysql-heatwave-innovations/

Frédéric Descamps Previews Oracle Developer Live — MySQL | October 2020
https://youtu.be/6i1WreKco9E

Dave Stokes and Frederic Descamps on Contributing to the MySQL Project | June 2020
https://youtu.be/NUU4W8O3teE

preFOSDEM 2020 MySQL Days | February 2020
https://www.youtube.com/playlist?list=PLwfImoydiSsuHfrVWcq8qJ_cz8o6vuoRo

 

Cheers
Jim
--
Jim Grisanzio, Sr. Community Manager, Oracle Developer Relations
https://twitter.com/jimgris

 





sql

PHP CRUD Operations with PostgreSQL Server

CRUD (Create, Read, Update, and Delete) operations are used in web applications to manipulate data in the database. These four basic operations are the core of managing data with the database. We have already shared a tutorial on performing create (insert), read (select), update, and delete operations in PHP CRUD Operations with MySQL. In this tutorial, we will build a PHP CRUD application with the PostgreSQL server. PostgreSQL, also known as Postgres, is an open-source relational database management system (RDBMS) that is free to use. We will connect with the PostgreSQL Server

The post PHP CRUD Operations with PostgreSQL Server appeared first on CodexWorld.




sql

CLI for SQLite Databases with auto-completion and syntax highlighting




sql

Don't Do This - PostgreSQL wiki




sql

Google’s AI Tool Big Sleep Finds Zero-Day Vulnerability in SQLite Database Engine

Google said it discovered a zero-day vulnerability in the SQLite open-source database engine using its large language model (LLM) assisted framework called Big Sleep (formerly Project Naptime). The tech giant described the development as the "first real-world vulnerability" uncovered using the artificial intelligence (AI) agent. "We believe this is the first public example of an AI agent finding




sql

Problem Notes for SAS®9 - 66438: You see the message "The informat $ could not be loaded, probably due to insufficient memory" after attempting to insert data into a MySQL database

For data that is being loaded from a SAS Stored Process Server, an insert into a MySQL database might fail with a warning, as well as an error message that says "During insert: Incorrect datetime value…"




sql

Problem Notes for SAS®9 - 66505: The OBS= option does not generate a limit clause when you use SAS/ACCESS Interface to PostgreSQL to access a Yellowbrick database

When you use SAS/ACCESS Interface to PostgreSQL to query a Yellowbrick database, the SAS OBS= option is not generating a limit clause on the query that is passed to the database. Click the



sql

Python Integration to SAS® Viya® - Executing SQL on Snowflake

Welcome to the continuation of my series Getting Started with Python Integration to SAS Viya. Given the exciting developments around SAS & Snowflake, I'm eager to demonstrate how to effortlessly connect Snowflake to the massively parallel processing CAS server in SAS Viya with the Python SWAT package. If you're interested [...]

Python Integration to SAS® Viya® - Executing SQL on Snowflake was published on SAS Users.





sql

FDA approves biosimilars: ustekinumab Otulfi and eculizumab Epysqli

The US Food and Drug Administration (FDA) granted approval for two biosimilars, Formycon's FYB202/Otulfi (ustekinumab-aauz) and Samsung Bioepis' Soliris biosimilar, Epysqli (eculizumab-aagh), on 27 September and 22 July 2024, respectively. FYB202/Otulfi is a biosimilar referencing Johnson & Johnson's Stelara, while Epysqli is a biosimilar referencing Alexion's Soliris.




sql

Hiring For Technical Person ( .Net and SQL Server) @ Navi Mumbai

Company: Talent Corner Hr Services Private Limited
Experience: 3 to 5
location: Navi Mumbai, Mumbai
Ref: 24756055
Summary: Job Description: 1. Master in performing technical tasks on .Net and SQL Server. 2. Data pulling for call centres from SQL Servers or, as per his knowledge, we can provide all the data in....




sql

DL_Stats Cross Site Scripting / Admin Bypass / SQL Injection

DL_Stats suffers from cross site scripting, arbitrary administrative access and remote SQL injection vulnerabilities.




sql

PageDirector CMS SQL Injection / Add Administrator

PageDirector CMS suffers from add administrator and remote SQL injection vulnerabilities.




sql

phpBugTracker 1.7.5 XSS / SQL Injection / Auth Bypass

phpBugTracker 1.7.5 suffers from cross site scripting, authorization bypass, and SQL injection vulnerabilities.




sql

MySQL Lite Administrator Beta 1 Cross Site Scripting

MySQL Lite Administrator version Beta 1 suffers from multiple cross site scripting vulnerabilities.




sql

JSPMySQL Administrador 1 Cross Site Request Forgery / Cross Site Scripting

JSPMySQL Administrador version 1 suffers from cross site request forgery and cross site scripting vulnerabilities.




sql

How to support full Unicode in MySQL databases

Are you using MySQL’s utf8 charset in your databases? In this write-up I’ll explain why you should switch to utf8mb4 instead, and how to do it.
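
In practice the switch boils down to a character-set conversion on the connection, the database defaults, and each existing table. A minimal sketch with invented names (take a backup first, and note that index key-length limits are one of the gotchas such write-ups cover):

-- make the client connection use utf8mb4
SET NAMES utf8mb4;

-- change the default character set for new tables in the database
ALTER DATABASE blog_db CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;

-- convert an existing table and its text columns in place
ALTER TABLE comments CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;

CONVERT TO changes both the table default and the existing character columns; because utf8mb4 uses up to four bytes per character, long VARCHAR columns inside indexes may need to be shortened.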




sql

Using SQL extensibility for processing dynamically typed XML data in XQuery queries

XQuery queries that include functions that operate on dynamically typed XML data are rewritten into compilable SQL constructs. XML data that is dynamically typed is XML data for which a specific XML data type cannot be determined at compile time and in fact may vary. In general, XQuery queries are rewritten into SQL queries that use SQL constructs in lieu of XQuery constructs. The SQL constructs include an “SQL polymorphic function” that is defined or recognized by a database system as valid syntax for an SQL query. The rewritten query applies the XML data to the SQL polymorphic function, but the XML data has been typed as XMLType, a data type recognized by SQL standards.




sql

Find out what's new for federation in Big SQL V4.X

Since my last tutorial on the subject appeared, some improvements have been made in terms of simplifying the setup process of the federation feature in Big SQL and adding support for new data sources or newer versions. This tutorial will take you through the incremental changes to the simplified configuration in the different versions of Big SQL V4 up to V4.2.4. It will also touch briefly on the more advanced topic of performance.




sql

Build a CRM/Sales System (WEB BASED) | PHP | Website Design | HTML | MySQL | Software Architecture | Freelancer





sql

Build me a website | PHP | Website Design | HTML | Graphic Design | MySQL | Freelancer





sql

Beekeeper Studio | Free SQL editor and database manager for MySQL, Postgres, SQLite, and SQL Server. Available for Windows, Mac, and Linux.




sql

MongoDB and Rockset link arms to figure out SQL-to-NoSQL application integration

NoSQL, no problem for Facebook-originating RocksDB

MongoDB and fellow database biz Rockset have integrated products in a bid to make it easier to work with the NoSQL database through standard relational database query language SQL.…




sql

The point of containers is they aren't VMs, yet Microsoft licenses SQL Server in containers as if they were VMs

And now to avoid container sprawl costing you plenty

Microsoft has slipped out licensing details for SQL Server running in containers and it will likely encourage developers to be pretty diligent in their use of Redmond’s database.…




sql

Wimpie Nortje: Database migration libraries for PostgreSQL.

It may be tempting at the start of a new project to create the first database tables manually, or to write SQL scripts that you run manually, especially when you first have to spend a significant amount of time sifting through all the migration libraries, and then some more to get one working properly.

Going through this process did slow me down at the start of the project, but I was determined to use a migration tool: hunting inexplicable bugs that only happen in production, just to find out there is a definition mismatch between the production and development databases, is not fun. Using such a tool also motivates you to write both the setup and teardown steps for each table while the current design is still fresh in your mind.

At first I considered a standalone migration tool because I expected one to be very good at that single task. However, learning the idiosyncrasies of a new tool and trying to make it fit seamlessly into my development workflow seemed like more trouble than it was worth.

I decided to stick with a Common Lisp library and found the following seven that work with PostgreSQL and/or Postmodern: Crane, Mito, cl-mgr, Orizuru-orm, cl-migrations, Postmodern-passenger-pigeon, and Database-migrations.

I quickly discounted Crane and Mito because they are ORM (Object Relational Mapper) libraries, which are way more complex than a dedicated migration library. Development on Crane stalled some time ago, and I don't feel it is mature enough for frictionless use yet. Mito declares itself to be in an alpha state, so it is also not mature enough yet.

I only stumbled onto cl-mgr and Orizuru-orm long after making my decision, so I did not investigate them seriously. Orizuru-orm is in any case an ORM, which I would have discounted because it is too complex for my needs. CL-mgr looks simple, which is a good thing. It is based on cl-dbi, which makes it a good candidate if you foresee switching databases, but even if I had discovered it sooner I would have discounted it for the same reason as CL-migrations.

CL-migrations looks very promising. It is a simple library focusing only on migrations. It uses clsql to interface with the database, which bothered me because I had already committed to using Postmodern and I try to avoid adding a lot of unused code to my projects. The positive side is that it interfaces with many different databases, so it is a good candidate if you are not committed to using Postmodern. It is also a stable code base with no outstanding bug reports.

The two projects I focused on were Postmodern-passenger-pigeon and Database-migrations, because they both use Postmodern as the database interface.

Postmodern-passenger-pigeon was in active development at the time and it seemed safer to use than Database-migrations because it can do dry runs, which is a very nice feature when you are upgrading your production database and face the possibility of losing data when things go awry. Unfortunately I could not get it working within a reasonable amount of time.

I finally settled on Database-migrations. It is a small code base focused on one task, it is mature, and it uses Postmodern, so it does not pull a whole new database interface into my project. There are, however, some less positive issues.

The first issue is a hindrance during development. Every time the migrations ASDF system (or the file containing it, as ASDF prefers that all systems be defined in a single file) is recompiled, it adds all the defined migrations to the migrations list. Though each one will only be applied once to the DB, it is still bothersome. One can clear the list with (setf database-migrations::*migrations* nil), but then only newly modified migration files will be added. The solution is to touch the .asd file after clearing the migrations list.

The second negative point is quite dangerous. The downgrade function takes a target version as a parameter, with a default target of 0. This means that if you execute downgrade without specifying a target version, you delete your whole database.

I am currently using Database-migrations and it works well for me. If for some reason I need to switch I will use cl-migrations.

Using Database-migrations

To address the danger of unintentionally deleting my database I created a wrapper function that does both upgrade and downgrade, and it requires a target version number.

Another practical issue I discovered is that upgrades and downgrades happen in the same order as they are defined in the migration file. If you create two tables in a single file where table 2 depends on table 1, then you cannot revert / downgrade, because Database-migrations will attempt to delete table 1 before table 2. The solution here is to use the def-queries-migration macro (instead of def-query-migration), which defines multiple queries simultaneously. If a single definition that creates multiple tables feels overwhelming, the other option is to stick with one migration definition per file.




sql

oscon: High Availability in MySQL - how to pick a solution that best matches your use case http://t.co/PItdw0maTj @h_ingo #oscon #tutorial




sql

Mastering Kafka Streams and ksqlDB

Working with unbounded and fast-moving data streams has historically been difficult. But with Kafka Streams and ksqlDB, building stream processing applications is easy and fun. This practical guide explores the world of real-time data systems through the lens of these popular technologies and explains important stream processing concepts against a backdrop of interesting business problems.




sql

Problem Notes for SAS®9 - 65939: "ERROR: Unable to transcode data to/from UCS-2 encoding" occurs when you run an SQL query using SAS/ACCESS Interface to ODBC on SAS 9.4M5 with UTF-8

When you run an SQL query using SAS/ACCESS Interface to ODBC under the following conditions, you might receive an error: You run SAS 9.4M5 (TS1M5) or SAS 9.4M6 (TS1M6)  i




sql

Problem Notes for SAS®9 - 35066: When a bulk-loading process fails with "SQL*Loader 2026" error, error message appears as a warning in the SAS log

If a bulk-loading process fails when you use SAS with SAS/ACCESS Interface to Oracle, you will receive the warning "WARNING: All or some rows were rejected/discarded." The actual error is "SQL*Loader-2026: The load was aborted because SQL




sql

Problem Notes for SAS®9 - 65682: Running FedSQL with an Oracle table is slow, even when you use a LIMIT clause

When you query an Oracle table and use the LIMIT clause with either SAS Federation Server or FedSQL, a row limit is not passed to the database.