Oracle Database 12c : Tom’s Top 12 Things About the Latest Generation of Database Technology

Oracle 2013. 8. 21. 18:14


Tom’s Top 12 Things About the Latest Generation of Database Technology
  • Functions (and procedures used within those functions) can be defined in the WITH clause. Performance boost compared to regular unit definition. A pragma allows regular functions to benefit from the same performance improvements. (Sketches of several of the SQL features in this list follow below.)
  • The default value of a column can use sequence.NEXTVAL.
  • Identity columns: multiple levels of control over how they are used. Can use simple or more complex syntax.
  • Metadata-only defaults for optional columns. In previous versions this was possible only for mandatory columns.
  • VARCHAR2(32767) in the database. Less than 4K is stored inline. More than 4K is stored out of line, similar to LOB, but simpler. Not available by default.
  • Top-N queries now use the row limiting clause, e.g. “OFFSET 10 ROWS FETCH FIRST 10 ROWS ONLY”. Similar to the MySQL syntax.
  • Row pattern matching. Quite a lot of new analytic syntax here.
  • Partitioning Improvements:
    - Asynchronous Global Index maintenance for DROP and TRUNCATE. Command returns instantly, but index cleanup happens later.
    - Cascade for TRUNCATE and EXCHANGE partition.
    - Multiple partition operations in a single DDL
    - Online move of a partition (without DBMS_REDEFINITION).
    - Interval + Reference Partitioning.
  • Adaptive Execution Plans:
    - If the optimizer notices the cardinality is not what was expected, so the current plan is not optimal, it can alter subsequent plan operations to allow for the differences between the estimated and actual cardinalities.
    - The stats gathered during this process are persisted as Adaptive Statistics, so future decisions can benefit from this.
    - You will see STATISTICS COLLECTOR steps in the SQL Trace. Can make the trace harder to read as it can contain information about the expected plan and the actual plan.
  • Enhanced Statistics:
    -  Some dynamic sampling operations are persistent, so they are not lost when the SQL is aged out.
    - Hybrid histograms. When the number of distinct values is greater than 254, “almost popular” values can get “lost” in the mix. A single bucket can now store the popularity of that value, effectively increasing the number of buckets without actually increasing it.
    - It may be possible to increase the maximum number of buckets via a parameter. (demo grounds)
    - Statistics gathered during loads. CTAS and INSERT … SELECT automatically compute stats.
    - Global temporary tables can have “session private statistics”. Previously, we had one-size-fits-all.
  • Temporary Undo (ALTER SESSION SET temp_undo_enabled=true):
    - UNDO for temporary tables can now be managed in TEMP, rather than the regular UNDO tablespace.
    - Reduces contents of regular UNDO, allowing better flashback operations.
    - Reduces the size of redo associated with recovering the regular UNDO tablespace.
  • Data Optimization:
    - Information Lifecycle Management: Uses heat map. Colder data is compressed and moved to lower tier storage. Controlled by declarative DDL policy.
  • Transaction Guard:
    - If a failure happens, your application may not know the actual status of a transaction. If it was successful, issuing it again could cause a duplicate transaction.
    - In these cases, you can mark a transaction whose state is “unknown” (as far as the application is concerned) as failed, so even though it may have succeeded, it will never be considered committed or be recovered. You’ve guaranteed the outcome.
  • Pluggable database:
    - Oracle provided metadata and data is kept in the container database (CDB).
    - User metadata and data is kept in the pluggable database (PDB).
    - One container can have multiple pluggable databases.
    - No namespace clashes. Allows public synonyms and database links at the PDB level, rather than the CDB level.
    - Cloning is quick and simple, as only user metadata and data need to be cloned (a cloning sketch follows the list).
    - Upgrades have the potential to be as simple as unplugging from the old version (12cR1) and plugging into the new version (12cR2).
    - Total resource usage is reduced on lower-use databases.
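A minimal sketch of the identity column and sequence-default items from the list above; the table and sequence names here are placeholders, not anything from the original notes.

-- Identity column: Oracle manages the underlying sequence (simple syntax shown).
CREATE TABLE t_identity (
  id          NUMBER GENERATED ALWAYS AS IDENTITY,
  description VARCHAR2(30)
);

-- Column default referencing a sequence directly.
CREATE SEQUENCE t_default_seq;

CREATE TABLE t_default (
  id          NUMBER DEFAULT t_default_seq.NEXTVAL,
  description VARCHAR2(30)
);

INSERT INTO t_identity (description) VALUES ('auto id');
INSERT INTO t_default  (description) VALUES ('seq default');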
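A similar sketch of the WITH-clause function and row limiting items, run against a hypothetical employees table; depending on the client, you may need SET SQLTERMINATOR OFF or a trailing slash so the semicolons inside the PL/SQL are not treated as statement terminators.

-- PL/SQL function declared inline in the WITH clause, plus Top-N via OFFSET/FETCH.
WITH
  FUNCTION annual_salary (p_monthly NUMBER) RETURN NUMBER IS
  BEGIN
    RETURN p_monthly * 12;
  END;
SELECT employee_id, annual_salary(salary) AS annual
FROM   employees
ORDER  BY annual DESC
OFFSET 10 ROWS FETCH FIRST 10 ROWS ONLY
/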
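And a one-liner for the pluggable database cloning item; the PDB names are placeholders, and depending on the file layout a FILE_NAME_CONVERT or similar storage clause may also be needed.

-- Clone an existing PDB; only user metadata and user data need to be copied.
CREATE PLUGGABLE DATABASE pdb_clone FROM pdb_source;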

Oracle Database Optimizer: An Insider’s View of How the Optimizer Works
Oracle Database 12c is the first step on the way to making an adaptive, or self-learning, optimizer.
Alternative subplans are precomputed and stored in the cursor, so no new hard parsing will be needed as part of the adaptation of an already executing plan. Statistics collectors are included in the plan execution. If the collectors cross a threshold, the plan might switch during execution from a nested loop join to a hash join.
You can see information about the adaptive actions that have occurred using the DBMS_XPLAN package, with the format option “+all_dyn_plan +adaptive”. If a plan has been adapted, you will see it indicated in the v$sql.is_resolved_dynamic_plan column.
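A hedged sketch of pulling that report for a single cursor via DBMS_XPLAN.DISPLAY_CURSOR, using the format options mentioned above; the SQL ID and child number are placeholders.

-- Report the plan of a previously executed statement, including adaptive plan detail.
SELECT *
FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR(
               sql_id          => 'sample_sql_id',  -- placeholder SQL ID
               cursor_child_no => 0,
               format          => '+all_dyn_plan +adaptive'));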
If this functionality scares you, you can turn it off using the OPTIMIZER_ADAPTIVE_REPORTING_ONLY parameter. The same work is done, but no actual adaptive action is taken.
During parallel execution, collectors can influence the distribution method (for example, choosing between HASH and BROADCAST distribution). This shows up in the plan as HYBRID HASH operations.
Dynamic statistics replace dynamic sampling. The resulting stats are cached as SHARED DYNAMIC STATS specific for the statement, including the bind values. This information is used for any session using the same statement.
Cardinality feedback can be used to re-optimize subsequent operations. Join statistics are monitored. Works with adaptive cursor sharing. Persisted on disk. New column v$sql.is_reoptimizable shows that a subsequent run will take this into consideration. Collectors are kept, even if the SQL statement is killed part way through. The plan shows that cardinality feedback is used.
SQL Plan Directives are based on a SQL phrase (a specific join) rather than the whole statement. Cached in the directive cache, but persisted in the SYSAUX tablespace. Managed using the DBMS_SPD package.
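A hedged sketch of inspecting those directives, assuming the DBA_SQL_PLAN_DIRECTIVES and DBA_SQL_PLAN_DIR_OBJECTS views and the DBMS_SPD.FLUSH_SQL_PLAN_DIRECTIVE call; the schema name is a placeholder.

-- Push any directives still in the directive cache down to SYSAUX, then list them.
EXEC DBMS_SPD.FLUSH_SQL_PLAN_DIRECTIVE;

SELECT o.owner, o.object_name, o.subobject_name AS column_name,
       d.type, d.state, d.reason
FROM   dba_sql_plan_directives d
       JOIN dba_sql_plan_dir_objects o ON o.directive_id = d.directive_id
WHERE  o.owner = 'SCOTT'   -- placeholder schema
ORDER  BY d.directive_id;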
Information gathered by the optimizer may prompt automatic creation of column groups, so the next time statistics are gathered, the extended statistics will be collected.

What’s New in Security in the Latest Generation of Database Technology
  • Privilege Analysis:
    - Track which direct privileges, and which privileges granted via roles, are actually being used, so you can determine the least privileges needed.
    - Monitoring is controlled using DBMS_PRIVILEGE_CAPTURE (a capture sketch follows this list).
    - Report what is used and what is not used.
  • Data Redaction: A variation on the column masking of VPD, but it doesn’t just blank the value, and it still allows queries against the column in the WHERE clause (a redaction sketch follows this list).
  • Enhanced Security of Audit Trail:
    - Single unified audit trail.
    - Extension of the audit management package.
    - Multiple audit management privileges.
  • Encryption Enhancements:
    - Allows SQL creation and management of wallets, rather than command line utilities. Allows easier remote management.
    - Export and import wallets/keys between pluggable databases.
    - Storage of wallets in ASM.
    - Much more…
  • Code-Based Access Control (CBAC):
    - A PL/SQL unit can have roles granted to it.
    - When the unit runs, any dynamic SQL running can have the privileges granted via the role.
    - Doesn’t affect compile time, so the focus is very much on dynamic SQL.
    - Useful with invoker’s rights, since the PL/SQL can now run with the calling user’s privileges plus the roles explicitly granted to the unit.
  • Invoker Rights:
    - Inherited rights: control accidental privilege escalation when a privileged user calls an invoker’s rights unit containing malicious code.
    - Invoker’s rights for views.
  • Separation of Duties:
    - SYSDBA – God
    - SYSOPER – More limited than SYSDBA, but still very powerful.
    - SYSBACKUP – Just enough to do a backup.
    - SYSDG – Just enough for data guard administration.
    - SYSKM – Just enough to perform basic key management tasks.
    - Roles for audit management.
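Tying back to the Privilege Analysis item above, a minimal sketch of a capture cycle with DBMS_PRIVILEGE_CAPTURE; the capture name is a placeholder, and the views named in the final comment are where the results should land.

BEGIN
  -- Define a database-wide capture and start recording which privileges are used.
  DBMS_PRIVILEGE_CAPTURE.CREATE_CAPTURE(
    name => 'demo_capture',
    type => DBMS_PRIVILEGE_CAPTURE.G_DATABASE);
  DBMS_PRIVILEGE_CAPTURE.ENABLE_CAPTURE(name => 'demo_capture');
END;
/

-- ... run the normal application workload, then stop and generate the report.
BEGIN
  DBMS_PRIVILEGE_CAPTURE.DISABLE_CAPTURE(name => 'demo_capture');
  DBMS_PRIVILEGE_CAPTURE.GENERATE_RESULT(name => 'demo_capture');
END;
/

-- Used and unused privileges can then be queried from DBA_USED_PRIVS and DBA_UNUSED_PRIVS.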
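And a hedged sketch of the Data Redaction item via DBMS_REDACT, using a hypothetical HR.EMPLOYEES table; the policy settings are illustrative only.

BEGIN
  -- Fully redact SALARY for every session matching the policy expression.
  DBMS_REDACT.ADD_POLICY(
    object_schema => 'HR',
    object_name   => 'EMPLOYEES',
    policy_name   => 'redact_salary',
    column_name   => 'SALARY',
    function_type => DBMS_REDACT.FULL,
    expression    => '1=1');
END;
/

-- The column can still be used in a WHERE clause; only the displayed value is redacted.
SELECT employee_id FROM hr.employees WHERE salary > 10000;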
Source: ORACLE-BASE

 

Gartner's Ten Myths of IT Security

IT News 2013. 6. 13. 16:32

Gartner analyst Jay Heiser points out that information security is full of 'misconceptions' and 'hype' about the threats facing enterprises and the technologies that have been used to protect data assets.

These misconceptions are sometimes widely accepted even among security professionals and can grow into outright 'myths'.

At the Gartner Security & Risk Management Summit held in National Harbor, Maryland, Heiser presented 'ten representative IT security myths' and warned that they can harm data security efforts and, by extension, the building of trust between the Chief Information Security Officer (CISO) and the business.

The security myths Heiser presented are as follows.

Myth No. 1: 'It won't happen to me'
Cause: Like the villagers who, worn out by the boy who cried wolf, failed to respond when the real wolf showed up, companies numbed by exaggerated threats end up asking their information security group only for basic precautions at the lowest possible cost.

Solution: Classify security threats by severity and identify the real problems the business is facing.

Myth No. 2: '10% of IT spending is enough for the information security budget'
Cause: This is far from reality, but in the opposite direction. According to Gartner's research, the average information security budget is in fact only about 5% of IT spending.

Solution: Please argue with data.

Myth No. 3: 'Security risk can be quantified'
Cause: The product of a 'numbers-first culture' built on the idea that whoever has the bigger number wins. People who believe this myth tend to assume a single Excel spreadsheet is all it takes to secure a security budget.

Solution: Develop non-numerical ways to show that IT risks are in fact business problems.

Myth No. 4: 'We have physical security (or SSL), so don't question the safety of our data'
Cause: An unrealistic belief that comes from a poor understanding of the threats.

Solution: Check whether the security tool you are about to buy is actually suited to protecting the types of data your company holds.

Myth No. 5: 'Complex passwords changed regularly reduce risk'
Cause: This is the line lazy administrators reach for most readily. We all know there is no such thing as a perfect password, and that today the real danger is sniffing, not cracking.

Solution: Don't relax and let your guard down.

Myth No. 6: 'Moving the CISO outside of IT automatically improves security'
Cause: An old way of dodging responsibility that assumes changing the structure changes the culture.

Solution: Analyze what is really weakening the security program.

Myth No. 7: 'The CISO only cares about security training'
Cause: This, too, is an attempt to dodge responsibility. Companies that want to treat security risk as someone else's problem simply point at the CISO, yet they don't really expect the CISO to tell them what they should do either.

Solution: Make the information security program part of the corporate culture as a whole.

Myth No. 8: 'Buy this tool, drop it in here, and all our problems will be solved'
Cause: The belief that someone on the outside can offer a magical fix for a problem you cannot solve yourself is unrealistic.

Solution: What is needed is systematic risk analysis, prioritization of remediation, and a long-term security plan.

Myth No. 9: 'Once the right policies are in place, it's smooth sailing from here'
Cause: An unrealistic idea that needs no further comment.

Solution: Clarifying accountability and choosing which problems to focus on is at the heart of everything.

Myth No. 10: 'Encryption is the best way to protect important files'
Cause: Encryption, when done well, is certainly highly effective. But if you view this demanding technology simply as a 'magic bullet' for compliance problems, you will surely see more harm than good.

Solution: Before making a decision, first make sure you have enough experience with encryption.

Finally, Heiser pointed to human nature, overreacting to unfamiliar situations and trying to avoid responsibility, as the reason so many of these myths circulate.

"Shifting blame is one of the bureaucratic approaches to risk management. But there is no reason the CISO should sit there and absorb all the blame," Heiser emphasized. editor@itworld.co.kr


Source: http://www.itworld.co.kr/news/82337



A Comparison of Oracle's DATE and TIMESTAMP Datatypes

Oracle 2013. 4. 9. 09:50


By James Koopmann

If you want to store date and time information in Oracle, you really only have two different options for the column's datatype. Let's take a quick look at these two datatypes and what they offer.

DATE datatype

This is the datatype that we are all too familiar with when we think about representing date and time values. It has the ability to store the month, day, year, century, hours, minutes, and seconds. It is typically good for recording when something has happened or should happen in the future. The problem with the DATE datatype is its granularity when trying to determine the time interval between two events that happen within a second of each other. This issue is solved later in this article when we discuss the TIMESTAMP datatype. In order to represent the stored date in a more readable format, the TO_CHAR function has traditionally been wrapped around the date, as in Listing A.

 

LISTING A:

Formatting a date

SQL> SELECT TO_CHAR(date1,'MM/DD/YYYY HH24:MI:SS') "Date" FROM date_table;
Date
---------------------------
06/20/2003 16:55:14
06/26/2003 11:16:36

 

About the only trouble I have seen people get into when using the DATE datatype is doing arithmetic on the column in order to figure out the number of years, weeks, days, hours, and seconds between two dates. What needs to be realized when doing the calculation is that when you do subtraction between dates, you get a number that represents the number of days. You should then multiply that number by the number of seconds in a day (86400) before you continue with calculations to determine the interval with which you are concerned. Check out Listing B for my solution on how to extract the individual time intervals for a subtraction of two dates. I am aware that the fractions could be reduced but I wanted to show all the numbers to emphasize the calculation.

 

LISTING B:

Determine the interval breakdown between two dates for a DATE datatype

  1         SELECT TO_CHAR(date1,'MMDDYYYY:HH24:MI:SS') date1,
  2         TO_CHAR(date2,'MMDDYYYY:HH24:MI:SS') date2,
  3         trunc(86400*(date2-date1))-
  4         60*(trunc((86400*(date2-date1))/60)) seconds,
  5         trunc((86400*(date2-date1))/60)-
  6         60*(trunc(((86400*(date2-date1))/60)/60)) minutes,
  7         trunc(((86400*(date2-date1))/60)/60)-
  8         24*(trunc((((86400*(date2-date1))/60)/60)/24)) hours,
  9         trunc((((86400*(date2-date1))/60)/60)/24) days,
 10         trunc(((((86400*(date2-date1))/60)/60)/24)/7) weeks
 11*        FROM date_table
DATE1             DATE2                SECONDS    MINUTES      HOURS       DAYS      WEEKS
----------------- ----------------- ---------- ---------- ---------- ---------- ----------
06202003:16:55:14 07082003:11:22:57         43         27         18         17          2
06262003:11:16:36 07082003:11:22:57         21          6          0         12          1


 

TIMESTAMP datatype

One of the main problems with the DATE datatype was its inability to be granular enough to determine which of two events happened first. Oracle has expanded on the DATE datatype and has given us the TIMESTAMP datatype, which stores all the information that the DATE datatype stores but also includes fractional seconds. If you want to convert a DATE datatype to a TIMESTAMP datatype format, just use the CAST function as I do in Listing C. As you can see, there is a fractional seconds part of '.000000' on the end of this conversion. This is only because the DATE datatype has no fractional seconds, so the value defaults to zeros and the display defaults to the default timestamp format (NLS_TIMESTAMP_FORMAT). If you are moving a DATE datatype column from one table to a TIMESTAMP datatype column of another table, all you need to do is a straight INSERT ... SELECT FROM and Oracle will do the conversion for you. Look at Listing D for formatting of the new TIMESTAMP datatype, where everything is the same as formatting the DATE datatype as we did in Listing A. Beware: while the TO_CHAR function works with both datatypes, the TRUNC function will not work with the TIMESTAMP datatype. This is a clear indication that the TIMESTAMP datatype should be used explicitly for dates and times where a difference in time is of the utmost importance, to the point that Oracle won't even let you compare like values. If you want to show the fractional seconds within a TIMESTAMP datatype, look at Listing E. In Listing E, we are only showing three placeholders for the fractional seconds.

 

 

LISTING C:

Convert DATE datatype to TIMESTAMP datatype

SQL> SELECT CAST(date1 AS TIMESTAMP) "Date" FROM t;
Date
-----------------------------------------------------
20-JUN-03 04.55.14.000000 PM
26-JUN-03 11.16.36.000000 AM
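The discussion above also mentions moving DATE data into a TIMESTAMP column with a straight INSERT ... SELECT. A small sketch of that, using a hypothetical ts_table as the target:

SQL> CREATE TABLE ts_table (time1 TIMESTAMP);

SQL> INSERT INTO ts_table (time1)
  2  SELECT date1 FROM date_table;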

 

 

LISTING D:

Formatting of the TIMESTAMP datatype

  1  SELECT TO_CHAR(time1,'MM/DD/YYYY HH24:MI:SS') "Date" FROM date_table
Date
-------------------
06/20/2003 16:55:14
06/26/2003 11:16:36

 

 

LISTING E:

Formatting of the TIMESTAMP datatype with fractional seconds

 1  SELECT TO_CHAR(time1,'MM/DD/YYYY HH24:MI:SS:FF3') "Date" FROM date_table
Date
-----------------------
06/20/2003 16:55:14:000
06/26/2003 11:16:36:000

 

 

Calculating the time difference between two TIMESTAMP datatypes is much easier than with the old DATE datatype. Look at what happens when you just do a straight subtraction of the columns in Listing F. As you can see, the results are much easier to recognize: 17 days, 18 hours, 27 minutes, and 43 seconds for the first row of output. This means no more worries about how many seconds are in a day, and no more of those cumbersome calculations. The calculations for getting the weeks, days, hours, minutes, and seconds therefore become a matter of picking out the numbers with the SUBSTR function, as can be seen in Listing G.


LISTING F:

Straight subtraction of two TIMESTAMP datatypes

1  SELECT time1,  time2,  (time2-time1)
  2*   FROM date_table
TIME1                          TIME2                          (TIME2-TIME1)
------------------------------ ----------------------------   ----------------------
06/20/2003:16:55:14:000000     07/08/2003:11:22:57:000000     +000000017 18:27:43.000000
06/26/2003:11:16:36:000000     07/08/2003:11:22:57:000000     +000000012 00:06:21.000000

 

 

LISTING G:

Determine the interval breakdown between two dates for a TIMESTAMP datatype

  1  SELECT time1,
  2         time2,
  3         substr((time2-time1),instr((time2-time1),' ')+7,2)                 seconds,
  4         substr((time2-time1),instr((time2-time1),' ')+4,2)                 minutes,
  5         substr((time2-time1),instr((time2-time1),' ')+1,2)                 hours,
  6         trunc(to_number(substr((time2-time1),1,instr(time2-time1,' '))))   days,
  7         trunc(to_number(substr((time2-time1),1,instr(time2-time1,' ')))/7) weeks
  8*   FROM date_table
TIME1                       TIME2                      SECONDS MINUTES HOURS DAYS WEEKS
-------------------------   -------------------------- ------- ------- ----- ---- -----
06/20/2003:16:55:14:000000  07/08/2003:11:22:57:000000  43     27      18    17    2
06/26/2003:11:16:36:000000  07/08/2003:11:22:57:000000  21     06      00    12    1

 

 

System Date and Time

In order to get the system date and time returned in a DATE datatype, you can use the SYSDATE function such as :

SQL> SELECT SYSDATE FROM DUAL;

In order to get the system date and time returned in a TIMESTAMP datatype, you can use the SYSTIMESTAMP function such as:

SQL> SELECT SYSTIMESTAMP FROM DUAL;

You can set the initialization parameter FIXED_DATE to return a constant value for what is returned from the SYSDATE function. This is a great tool for testing date and time sensitive code. Just beware that this parameter has no effect on the SYSTIMESTAMP function. This can be seen in Listing H.

 

 

LISTING H:

Setting FIXED_DATE and effects on SYSDATE and SYSTIMESTAMP

SQL> ALTER SYSTEM SET fixed_date = '2003-01-01-10:00:00';
System altered.

SQL> select sysdate from dual;
SYSDATE
---------
01-JAN-03

SQL> select systimestamp from dual;
SYSTIMESTAMP
---------------------------------------------------------
09-JUL-03 11.05.02.519000 AM -06:00

 

 

When working with date and time, the options are clear. You have at your disposal the DATE and TIMESTAMP datatypes. Just be aware that while there are similarities, there are also differences that could create havoc if you try to convert to the more powerful TIMESTAMP datatype. Each of the two has strengths in simplicity and granularity. Choose wisely.
